Mar 11 08:39:07 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 11 08:39:07 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 11 08:39:07 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:39:08 crc restorecon[4750]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 
08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 
08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:39:08 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 
crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:39:08 crc restorecon[4750]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc 
restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:39:08 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 11 08:39:09 crc kubenswrapper[4808]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.501387 4808 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511267 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511623 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511636 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511645 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511656 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511666 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511676 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511685 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511693 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511703 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511712 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511720 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511730 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511738 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511746 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511757 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511767 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511777 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511785 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511793 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511801 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511822 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511829 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511837 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511845 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511853 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511860 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511868 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511875 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511883 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 
08:39:09.511890 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511898 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511905 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511913 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511921 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511929 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511937 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511945 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511952 4808 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511960 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511968 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511975 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511984 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.511994 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512003 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512012 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512022 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512031 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512039 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512047 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512055 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512063 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512071 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512078 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512086 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512094 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512102 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512109 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512117 4808 feature_gate.go:330] unrecognized feature gate: 
InsightsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512124 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512131 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512141 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512149 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512157 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512164 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512172 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512179 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512188 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512195 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512202 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.512210 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512384 4808 flags.go:64] FLAG: --address="0.0.0.0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512403 4808 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512417 4808 flags.go:64] FLAG: 
--anonymous-auth="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512429 4808 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512441 4808 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512450 4808 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512462 4808 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512472 4808 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512482 4808 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512491 4808 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512500 4808 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512512 4808 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512521 4808 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512530 4808 flags.go:64] FLAG: --cgroup-root="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512539 4808 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512548 4808 flags.go:64] FLAG: --client-ca-file="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512557 4808 flags.go:64] FLAG: --cloud-config="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512566 4808 flags.go:64] FLAG: --cloud-provider="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512575 4808 flags.go:64] FLAG: --cluster-dns="[]" Mar 11 08:39:09 crc 
kubenswrapper[4808]: I0311 08:39:09.512586 4808 flags.go:64] FLAG: --cluster-domain="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512595 4808 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512605 4808 flags.go:64] FLAG: --config-dir="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512614 4808 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512623 4808 flags.go:64] FLAG: --container-log-max-files="5" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512634 4808 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512642 4808 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512652 4808 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512661 4808 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512670 4808 flags.go:64] FLAG: --contention-profiling="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512697 4808 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512706 4808 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512715 4808 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512724 4808 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512735 4808 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512744 4808 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512754 4808 
flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512762 4808 flags.go:64] FLAG: --enable-load-reader="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512771 4808 flags.go:64] FLAG: --enable-server="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512780 4808 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512791 4808 flags.go:64] FLAG: --event-burst="100" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512800 4808 flags.go:64] FLAG: --event-qps="50" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512809 4808 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512818 4808 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512827 4808 flags.go:64] FLAG: --eviction-hard="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512837 4808 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512846 4808 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512854 4808 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512865 4808 flags.go:64] FLAG: --eviction-soft="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512874 4808 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512883 4808 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512891 4808 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512900 4808 flags.go:64] FLAG: --experimental-mounter-path="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 
08:39:09.512909 4808 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512918 4808 flags.go:64] FLAG: --fail-swap-on="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512927 4808 flags.go:64] FLAG: --feature-gates="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512937 4808 flags.go:64] FLAG: --file-check-frequency="20s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512946 4808 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512955 4808 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512965 4808 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512976 4808 flags.go:64] FLAG: --healthz-port="10248" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512985 4808 flags.go:64] FLAG: --help="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.512993 4808 flags.go:64] FLAG: --hostname-override="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513002 4808 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513012 4808 flags.go:64] FLAG: --http-check-frequency="20s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513021 4808 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513030 4808 flags.go:64] FLAG: --image-credential-provider-config="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513038 4808 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513047 4808 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513056 4808 flags.go:64] FLAG: --image-service-endpoint="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513065 4808 
flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513074 4808 flags.go:64] FLAG: --kube-api-burst="100" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513082 4808 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513091 4808 flags.go:64] FLAG: --kube-api-qps="50" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513100 4808 flags.go:64] FLAG: --kube-reserved="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513109 4808 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513118 4808 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513127 4808 flags.go:64] FLAG: --kubelet-cgroups="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513137 4808 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513146 4808 flags.go:64] FLAG: --lock-file="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513155 4808 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513164 4808 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513173 4808 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513196 4808 flags.go:64] FLAG: --log-json-split-stream="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513206 4808 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513216 4808 flags.go:64] FLAG: --log-text-split-stream="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513224 4808 flags.go:64] FLAG: --logging-format="text" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513233 4808 
flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513242 4808 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513251 4808 flags.go:64] FLAG: --manifest-url="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513260 4808 flags.go:64] FLAG: --manifest-url-header="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513276 4808 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513286 4808 flags.go:64] FLAG: --max-open-files="1000000" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513297 4808 flags.go:64] FLAG: --max-pods="110" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513306 4808 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513316 4808 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513325 4808 flags.go:64] FLAG: --memory-manager-policy="None" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513335 4808 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513344 4808 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513376 4808 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513387 4808 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513406 4808 flags.go:64] FLAG: --node-status-max-images="50" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513415 4808 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513424 
4808 flags.go:64] FLAG: --oom-score-adj="-999" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513433 4808 flags.go:64] FLAG: --pod-cidr="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513442 4808 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513454 4808 flags.go:64] FLAG: --pod-manifest-path="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513463 4808 flags.go:64] FLAG: --pod-max-pids="-1" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513472 4808 flags.go:64] FLAG: --pods-per-core="0" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513481 4808 flags.go:64] FLAG: --port="10250" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513490 4808 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513500 4808 flags.go:64] FLAG: --provider-id="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513509 4808 flags.go:64] FLAG: --qos-reserved="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513519 4808 flags.go:64] FLAG: --read-only-port="10255" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513528 4808 flags.go:64] FLAG: --register-node="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513537 4808 flags.go:64] FLAG: --register-schedulable="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513546 4808 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513560 4808 flags.go:64] FLAG: --registry-burst="10" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513569 4808 flags.go:64] FLAG: --registry-qps="5" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513578 4808 flags.go:64] FLAG: --reserved-cpus="" Mar 11 08:39:09 crc kubenswrapper[4808]: 
I0311 08:39:09.513587 4808 flags.go:64] FLAG: --reserved-memory="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513598 4808 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513607 4808 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513616 4808 flags.go:64] FLAG: --rotate-certificates="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513625 4808 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513633 4808 flags.go:64] FLAG: --runonce="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513642 4808 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513652 4808 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513661 4808 flags.go:64] FLAG: --seccomp-default="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513670 4808 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513679 4808 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513689 4808 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513697 4808 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513706 4808 flags.go:64] FLAG: --storage-driver-password="root" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513716 4808 flags.go:64] FLAG: --storage-driver-secure="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513724 4808 flags.go:64] FLAG: --storage-driver-table="stats" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513733 4808 flags.go:64] FLAG: --storage-driver-user="root" Mar 11 08:39:09 crc 
kubenswrapper[4808]: I0311 08:39:09.513742 4808 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513751 4808 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513760 4808 flags.go:64] FLAG: --system-cgroups="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513769 4808 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513782 4808 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513791 4808 flags.go:64] FLAG: --tls-cert-file="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513799 4808 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513811 4808 flags.go:64] FLAG: --tls-min-version="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513819 4808 flags.go:64] FLAG: --tls-private-key-file="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513828 4808 flags.go:64] FLAG: --topology-manager-policy="none" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513837 4808 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513846 4808 flags.go:64] FLAG: --topology-manager-scope="container" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513855 4808 flags.go:64] FLAG: --v="2" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513866 4808 flags.go:64] FLAG: --version="false" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513876 4808 flags.go:64] FLAG: --vmodule="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513887 4808 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.513896 4808 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 11 08:39:09 crc 
kubenswrapper[4808]: W0311 08:39:09.514095 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514106 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514116 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514124 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514132 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514141 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514150 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514158 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514165 4808 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514174 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514191 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514201 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514209 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514218 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514226 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514234 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514242 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514249 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514257 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514265 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514273 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514281 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514289 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514297 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514305 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514313 4808 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514320 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514328 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514335 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514343 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514351 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514381 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514389 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514397 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514405 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514414 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514422 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514430 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514446 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514455 4808 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514463 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514471 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514482 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514491 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514503 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514513 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514523 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514532 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514540 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514547 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514556 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514564 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514572 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514582 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514591 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514599 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514607 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514616 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514623 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514632 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514643 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514654 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514663 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514672 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514683 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514690 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514698 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514706 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514714 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514721 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.514729 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.514755 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.525020 4808 
server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.525064 4808 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525133 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525141 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525146 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525150 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525154 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525158 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525161 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525165 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525168 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525172 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525175 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525179 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525182 4808 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 
08:39:09.525186 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525189 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525193 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525196 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525200 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525203 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525207 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525211 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525214 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525218 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525222 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525226 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525229 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525233 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525236 4808 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525239 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525243 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525246 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525251 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525254 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525258 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525267 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525271 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525275 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525279 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525284 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525289 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525293 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525297 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525301 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525305 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525310 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525314 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525318 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525323 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525329 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525334 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525339 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525343 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525348 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525353 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525385 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525390 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525393 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525397 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525400 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525404 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525408 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525411 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525415 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525420 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525424 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525427 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525431 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525434 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525438 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525442 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525448 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.525460 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525584 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525593 4808 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525597 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525601 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525605 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525608 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525612 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525616 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525619 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525623 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525626 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525631 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525636 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525640 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525645 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525649 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525652 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525657 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525662 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525666 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525670 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525674 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525678 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525682 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525687 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525690 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 11 08:39:09 crc 
kubenswrapper[4808]: W0311 08:39:09.525695 4808 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525699 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525703 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525709 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525713 4808 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525718 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525723 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525727 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525733 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525738 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525743 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525747 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525751 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525755 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525758 4808 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525763 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525769 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525777 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525786 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525793 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525799 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525804 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525809 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525814 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525818 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525823 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525827 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525831 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525834 4808 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525838 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525842 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525846 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525850 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525853 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525857 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525860 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525864 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525867 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525871 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525875 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525878 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525882 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525885 4808 feature_gate.go:330] unrecognized 
feature gate: OnClusterBuild Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525889 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.525893 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.525901 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.526091 4808 server.go:940] "Client rotation is on, will bootstrap in background" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.530302 4808 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.537573 4808 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.537734 4808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.540144 4808 server.go:997] "Starting client certificate rotation" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.540199 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.540381 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.569499 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.572038 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.574408 4808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.598738 4808 log.go:25] "Validated CRI v1 runtime API" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.642801 4808 log.go:25] "Validated CRI v1 image API" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.645612 4808 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.652209 4808 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-11-08-34-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.652272 4808 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.683448 4808 manager.go:217] Machine: {Timestamp:2026-03-11 08:39:09.679174033 +0000 UTC m=+0.632497433 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8423e724-ca17-4e6e-9671-7e629ecf3f36 BootID:fb54ef7d-152b-411a-b511-256d1778abe5 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:2e:16:5a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2e:16:5a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:80:65:bf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a0:b3:1c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cc:7d:29 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:89:82:cb Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:d2:69:62 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d2:12:54:bd:d5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:26:7b:8d:e7:1c:08 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.683835 4808 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.684016 4808 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.686339 4808 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.686930 4808 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.687018 4808 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.687398 4808 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.687418 4808 container_manager_linux.go:303] "Creating device plugin manager" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.688185 4808 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.688241 4808 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.688603 4808 state_mem.go:36] "Initialized new in-memory state store" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.688769 4808 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.693642 4808 kubelet.go:418] "Attempting to sync node with API server" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.693747 4808 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.693865 4808 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.693891 4808 kubelet.go:324] "Adding apiserver pod source" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.693913 4808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 
08:39:09.700260 4808 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.702435 4808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.702830 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.702873 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.702981 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.703004 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.706997 4808 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 11 08:39:09 
crc kubenswrapper[4808]: I0311 08:39:09.708951 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.708997 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709012 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709026 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709047 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709063 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709077 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709098 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709114 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709128 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709170 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.709185 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.711859 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.712683 4808 server.go:1280] "Started kubelet" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 
08:39:09.713982 4808 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.713903 4808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 08:39:09 crc systemd[1]: Started Kubernetes Kubelet. Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.715650 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.715674 4808 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.723776 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.723855 4808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.724495 4808 server.go:460] "Adding debug handlers to kubelet server" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.724941 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.725085 4808 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.725112 4808 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.725402 4808 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.727035 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.727149 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.727306 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.727500 4808 factory.go:55] Registering systemd factory Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.727532 4808 factory.go:221] Registration of the systemd container factory successfully Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.727518 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.735171 4808 factory.go:153] Registering CRI-O factory Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.735330 4808 factory.go:221] Registration of the crio container factory successfully Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.735602 4808 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.735791 4808 factory.go:103] Registering Raw factory Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.735928 4808 manager.go:1196] Started watching for new ooms in manager Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.737046 4808 manager.go:319] Starting recovery of all containers Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.740971 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.741084 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.741102 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 11 08:39:09 crc 
kubenswrapper[4808]: I0311 08:39:09.741117 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.741134 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.741149 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743053 4808 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743101 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743122 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743147 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743174 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743190 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743204 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743219 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743235 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 
08:39:09.743256 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743273 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743331 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743380 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743401 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743418 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743438 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743455 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743473 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743492 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743512 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743531 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743564 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743584 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743602 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743618 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743671 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743692 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743711 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743728 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743748 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743767 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743790 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743806 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743824 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743839 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743855 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743868 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743897 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743912 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743929 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743945 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743961 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743975 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.743991 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744012 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744031 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744046 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744071 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744086 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744102 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744120 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744136 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744153 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744169 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744184 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744198 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744211 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744228 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744241 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744255 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744268 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744282 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744301 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744320 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744338 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744379 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744400 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744455 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744481 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744502 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744524 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744542 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744555 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744597 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744612 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744626 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744723 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744740 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744754 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744769 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744791 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744804 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744820 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744840 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744854 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744867 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744881 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744896 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744910 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744928 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744942 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744957 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744970 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.744987 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745006 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745026 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745040 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745053 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745067 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745088 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745104 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745120 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745136 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745151 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745166 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745182 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745197 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745213 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745229 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745245 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745261 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745278 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745292 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745306 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745321 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745342 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745375 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745393 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745406 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745421 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745437 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745454 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745467 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745497 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745513 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745528 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745544 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745560 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745573 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745588 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745604 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745620 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745634 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745650 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745666 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745680 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745701 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745717 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745731 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745747 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745762 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745774 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745789 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745803 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745818 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745832 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745847 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745862 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745877 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745890 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745905 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745919 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745933 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745947 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745963 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745979 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.745998 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746018 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746035 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746050 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746065 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746078 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746092 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746106 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746121 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746135 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746149 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746162 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746176 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746192 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746208 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746222 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746240 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746254 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746268 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual
state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746284 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746298 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746312 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746327 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746342 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746509 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746526 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746540 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746559 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746573 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746589 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746605 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746620 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746636 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746651 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746669 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746685 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746700 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" 
seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746716 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746733 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746751 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746766 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746789 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746807 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 11 08:39:09 
crc kubenswrapper[4808]: I0311 08:39:09.746823 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746841 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746859 4808 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746873 4808 reconstruct.go:97] "Volume reconstruction finished" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.746884 4808 reconciler.go:26] "Reconciler: start to sync state" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.758063 4808 manager.go:324] Recovery completed Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.780599 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.783914 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.783986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.784004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc 
kubenswrapper[4808]: I0311 08:39:09.785014 4808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.785596 4808 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.785630 4808 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.785667 4808 state_mem.go:36] "Initialized new in-memory state store" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.787981 4808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.788038 4808 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.788093 4808 kubelet.go:2335] "Starting kubelet main sync loop" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.788173 4808 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 08:39:09 crc kubenswrapper[4808]: W0311 08:39:09.791052 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.791331 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.802974 4808 policy_none.go:49] "None policy: Start" Mar 11 08:39:09 crc 
kubenswrapper[4808]: I0311 08:39:09.804492 4808 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.804532 4808 state_mem.go:35] "Initializing new in-memory state store" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.825254 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.878783 4808 manager.go:334] "Starting Device Plugin manager" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.878843 4808 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.878860 4808 server.go:79] "Starting device plugin registration server" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.879503 4808 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.879527 4808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.880142 4808 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.880517 4808 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.880548 4808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.889294 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 11 08:39:09 crc kubenswrapper[4808]: 
I0311 08:39:09.889464 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.890544 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.890998 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.891035 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.891048 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.891235 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.891924 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.892011 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.892784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.892845 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.892873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893110 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893228 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893272 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.893842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.894935 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: 
I0311 08:39:09.895143 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.895199 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.897537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.897572 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.897590 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.898071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.898114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.898141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.898299 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.899241 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.899316 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.899908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.899941 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.899958 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.900206 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.900282 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901918 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.901936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.929054 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950220 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950294 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950395 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950505 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950578 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950630 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950694 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950780 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.950928 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 
08:39:09.951023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.979901 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.981635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.981705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.981725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:09 crc kubenswrapper[4808]: I0311 08:39:09.981776 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:09 crc kubenswrapper[4808]: E0311 08:39:09.982500 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.052705 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.052885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053041 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053115 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053622 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053731 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053854 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.053964 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:10 
crc kubenswrapper[4808]: I0311 08:39:10.054083 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054110 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054153 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054246 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054269 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: 
I0311 08:39:10.054440 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054345 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054333 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.054473 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.182864 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.184701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.184740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.184753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.184779 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.185321 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.233281 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.242052 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.275071 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.294236 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ef3f2a13dbc2d232bca5624ec25f4dc957a909f8eff46a41d0ae27afc69ef08b WatchSource:0}: Error finding container ef3f2a13dbc2d232bca5624ec25f4dc957a909f8eff46a41d0ae27afc69ef08b: Status 404 returned error can't find the container with id ef3f2a13dbc2d232bca5624ec25f4dc957a909f8eff46a41d0ae27afc69ef08b Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.301098 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4545bdc27b2c8fa4f5530b2e8ee03eac9944e86b9998c1d184487a56c63f12e0 WatchSource:0}: Error finding container 4545bdc27b2c8fa4f5530b2e8ee03eac9944e86b9998c1d184487a56c63f12e0: Status 404 returned error can't find the container with id 4545bdc27b2c8fa4f5530b2e8ee03eac9944e86b9998c1d184487a56c63f12e0 Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.301214 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.306396 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.323233 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-255fc09272832cab7f89cf61268508bd1c58c34e0d5b7a438eb0dcb4a10a61e5 WatchSource:0}: Error finding container 255fc09272832cab7f89cf61268508bd1c58c34e0d5b7a438eb0dcb4a10a61e5: Status 404 returned error can't find the container with id 255fc09272832cab7f89cf61268508bd1c58c34e0d5b7a438eb0dcb4a10a61e5 Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.330653 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.335624 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7022d282d6bad22f73936f0a5ecd4b5a59f1528fc020edc4f308b4d4a3f1341e WatchSource:0}: Error finding container 7022d282d6bad22f73936f0a5ecd4b5a59f1528fc020edc4f308b4d4a3f1341e: Status 404 returned error can't find the container with id 7022d282d6bad22f73936f0a5ecd4b5a59f1528fc020edc4f308b4d4a3f1341e Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.548170 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.548270 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.586539 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.588511 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.588564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.588578 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.588611 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.589171 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Mar 11 08:39:10 crc kubenswrapper[4808]: W0311 08:39:10.589213 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.589310 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: 
connect: connection refused" logger="UnhandledError" Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.716970 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.794116 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55a23295a95ce36a4bb4c9de49cf344e0fbb9a1c51c6606ad85f1c89e8b0a4ed"} Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.795424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef3f2a13dbc2d232bca5624ec25f4dc957a909f8eff46a41d0ae27afc69ef08b"} Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.797282 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4545bdc27b2c8fa4f5530b2e8ee03eac9944e86b9998c1d184487a56c63f12e0"} Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.798703 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7022d282d6bad22f73936f0a5ecd4b5a59f1528fc020edc4f308b4d4a3f1341e"} Mar 11 08:39:10 crc kubenswrapper[4808]: I0311 08:39:10.800160 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"255fc09272832cab7f89cf61268508bd1c58c34e0d5b7a438eb0dcb4a10a61e5"} Mar 11 08:39:10 crc 
kubenswrapper[4808]: W0311 08:39:10.820941 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:10 crc kubenswrapper[4808]: E0311 08:39:10.821013 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:11 crc kubenswrapper[4808]: E0311 08:39:11.035835 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:11 crc kubenswrapper[4808]: E0311 08:39:11.132720 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Mar 11 08:39:11 crc kubenswrapper[4808]: W0311 08:39:11.208581 4808 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:11 crc kubenswrapper[4808]: E0311 08:39:11.208697 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.389949 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.391853 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.391892 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.391902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.391932 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:11 crc kubenswrapper[4808]: E0311 08:39:11.392445 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.697690 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:39:11 crc kubenswrapper[4808]: E0311 08:39:11.699088 4808 certificate_manager.go:562] "Unhandled 
Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.717390 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.810884 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea" exitCode=0 Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.811129 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.811114 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.812714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.812757 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.812773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.813461 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99" exitCode=0 Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.813519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.813787 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.814296 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815580 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.815781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.817965 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f" exitCode=0 Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.818022 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.818109 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.819012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.819038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.819063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.821091 4808 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287" exitCode=0 Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.821143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.821282 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.822850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.822904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.822923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.826047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c4b6250310d7b389fc8a10a376b7a92788768aba4d457abe540cf4b23507929f"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.826093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72cacaca67f766f06463136d31ebea03efe7d2f215fe0c453b5918c82c4e3536"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.826108 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.826121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e54127863996440e3e037d7cbbb6a23234fb6b4fcf9fcfd071867341bb3b7963"} Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.826158 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.827247 4808 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.827278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:11 crc kubenswrapper[4808]: I0311 08:39:11.827288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: W0311 08:39:12.648944 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:12 crc kubenswrapper[4808]: E0311 08:39:12.649053 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.717193 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:12 crc kubenswrapper[4808]: E0311 08:39:12.733605 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.837632 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.837736 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.838522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.838552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.838559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.841482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.841507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.841516 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.841551 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.843615 4808 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b" exitCode=0 Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.843717 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.843746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.844547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.844597 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.844616 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.847491 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848099 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848370 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848407 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5"} Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.848754 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.851804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.851852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.851868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: W0311 08:39:12.991234 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:39:12 crc kubenswrapper[4808]: E0311 08:39:12.991339 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.993554 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.994854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.994883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.994893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:12 crc kubenswrapper[4808]: I0311 08:39:12.994923 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:12 crc kubenswrapper[4808]: E0311 08:39:12.995222 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.861570 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ddee38665748f8724f792ddf14e2d20ea119da18cc003a2d6b3fcf3fffcdb60"} Mar 11 08:39:13 crc 
kubenswrapper[4808]: I0311 08:39:13.861642 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.863013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.863050 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.863064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.864531 4808 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae" exitCode=0 Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.864628 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.865022 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.865377 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae"} Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.865465 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.865806 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866161 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.866777 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.867121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.867145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:13 crc kubenswrapper[4808]: I0311 08:39:13.867155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.409669 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.409923 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.411302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:14 
crc kubenswrapper[4808]: I0311 08:39:14.411337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.411346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.869950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa"} Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.869988 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.870006 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023"} Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.870026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01"} Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.870040 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.870040 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e"} Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.870045 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:14 
crc kubenswrapper[4808]: I0311 08:39:14.871350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.871398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.871420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.871428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.871401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:14 crc kubenswrapper[4808]: I0311 08:39:14.871475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.424318 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.466093 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.466333 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.468045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.468098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.468145 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.662959 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.719176 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.879465 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f"} Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.879566 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.879641 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.880984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.881049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.880984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.881075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 08:39:15.881104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:15 crc kubenswrapper[4808]: I0311 
08:39:15.881131 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.146680 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.196157 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.198016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.198062 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.198072 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.198106 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.369829 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.882949 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.883114 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.885061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.885125 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:16 crc 
kubenswrapper[4808]: I0311 08:39:16.885136 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.885297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.885350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:16 crc kubenswrapper[4808]: I0311 08:39:16.885384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.705768 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.706248 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.708527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.708578 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.708589 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.717801 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.732161 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.886828 4808 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.886857 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.886941 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.887074 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.888770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.888831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.888855 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.888987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.889053 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.889070 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.889080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.889106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 11 08:39:17 crc kubenswrapper[4808]: I0311 08:39:17.889104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.466621 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.466757 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.889630 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.889631 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.890952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.890981 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.891002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.891013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.891003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:18 crc kubenswrapper[4808]: I0311 08:39:18.891031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:19 crc kubenswrapper[4808]: E0311 08:39:19.890674 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:39:20 crc kubenswrapper[4808]: I0311 08:39:20.459230 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:20 crc kubenswrapper[4808]: I0311 08:39:20.459364 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:20 crc kubenswrapper[4808]: I0311 08:39:20.460695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:20 crc kubenswrapper[4808]: I0311 08:39:20.460850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:20 crc kubenswrapper[4808]: I0311 08:39:20.461072 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:23 crc kubenswrapper[4808]: I0311 08:39:23.718687 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 11 08:39:23 crc kubenswrapper[4808]: W0311 08:39:23.807828 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
net/http: TLS handshake timeout Mar 11 08:39:23 crc kubenswrapper[4808]: I0311 08:39:23.807920 4808 trace.go:236] Trace[1287396229]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Mar-2026 08:39:13.806) (total time: 10001ms): Mar 11 08:39:23 crc kubenswrapper[4808]: Trace[1287396229]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:39:23.807) Mar 11 08:39:23 crc kubenswrapper[4808]: Trace[1287396229]: [10.001582655s] [10.001582655s] END Mar 11 08:39:23 crc kubenswrapper[4808]: E0311 08:39:23.807941 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.198345 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:39:24 crc kubenswrapper[4808]: W0311 08:39:24.199111 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.199291 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:24 crc kubenswrapper[4808]: W0311 08:39:24.200586 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.200621 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.203499 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 
08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.205337 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.207476 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.208310 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer" start-of-body= Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.208510 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer" Mar 11 08:39:24 crc 
kubenswrapper[4808]: W0311 08:39:24.215069 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z Mar 11 08:39:24 crc kubenswrapper[4808]: E0311 08:39:24.215180 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.216531 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.216601 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.221054 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.221128 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.718769 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:24Z is after 2026-02-23T05:33:13Z Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.910682 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.913802 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ddee38665748f8724f792ddf14e2d20ea119da18cc003a2d6b3fcf3fffcdb60" exitCode=255 Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.913877 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7ddee38665748f8724f792ddf14e2d20ea119da18cc003a2d6b3fcf3fffcdb60"} Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.914105 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.915593 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.915635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.915656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:24 crc kubenswrapper[4808]: I0311 08:39:24.916545 4808 scope.go:117] "RemoveContainer" containerID="7ddee38665748f8724f792ddf14e2d20ea119da18cc003a2d6b3fcf3fffcdb60" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.671461 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.721001 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:25Z is after 2026-02-23T05:33:13Z Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.918033 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.918617 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.920340 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" exitCode=255 Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.920401 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87"} Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.920489 4808 scope.go:117] "RemoveContainer" containerID="7ddee38665748f8724f792ddf14e2d20ea119da18cc003a2d6b3fcf3fffcdb60" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.920640 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.922333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.922381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.922395 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.923055 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:25 crc kubenswrapper[4808]: E0311 08:39:25.923297 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:25 crc kubenswrapper[4808]: I0311 08:39:25.928793 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.186955 4808 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.187217 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.189031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.189086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.189110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.206391 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.719034 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:26Z is after 2026-02-23T05:33:13Z Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.925506 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.928441 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.928832 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.929468 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.929588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.929907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.930051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.930100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.930118 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:26 crc kubenswrapper[4808]: I0311 08:39:26.931486 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:26 crc kubenswrapper[4808]: E0311 08:39:26.931845 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:27 crc kubenswrapper[4808]: I0311 08:39:27.722480 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:27Z is after 2026-02-23T05:33:13Z Mar 11 08:39:27 crc 
kubenswrapper[4808]: I0311 08:39:27.930406 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:27 crc kubenswrapper[4808]: I0311 08:39:27.931477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:27 crc kubenswrapper[4808]: I0311 08:39:27.931515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:27 crc kubenswrapper[4808]: I0311 08:39:27.931527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:27 crc kubenswrapper[4808]: I0311 08:39:27.932219 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:27 crc kubenswrapper[4808]: E0311 08:39:27.932485 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:28 crc kubenswrapper[4808]: W0311 08:39:28.257844 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:28Z is after 2026-02-23T05:33:13Z Mar 11 08:39:28 crc kubenswrapper[4808]: E0311 08:39:28.257978 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:28Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:28 crc kubenswrapper[4808]: I0311 08:39:28.466496 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:39:28 crc kubenswrapper[4808]: I0311 08:39:28.466621 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:39:28 crc kubenswrapper[4808]: I0311 08:39:28.722669 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:28Z is after 2026-02-23T05:33:13Z Mar 11 08:39:29 crc kubenswrapper[4808]: I0311 08:39:29.719847 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:29Z is after 2026-02-23T05:33:13Z Mar 11 08:39:29 crc kubenswrapper[4808]: E0311 
08:39:29.890865 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:39:30 crc kubenswrapper[4808]: W0311 08:39:30.432715 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:30Z is after 2026-02-23T05:33:13Z Mar 11 08:39:30 crc kubenswrapper[4808]: E0311 08:39:30.432832 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.599452 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.601311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.601415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.601441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.601483 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:30 crc kubenswrapper[4808]: E0311 08:39:30.606604 4808 kubelet_node_status.go:99] "Unable to register node 
with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:30Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:39:30 crc kubenswrapper[4808]: E0311 08:39:30.616080 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:30Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:39:30 crc kubenswrapper[4808]: I0311 08:39:30.721508 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:30Z is after 2026-02-23T05:33:13Z Mar 11 08:39:31 crc kubenswrapper[4808]: I0311 08:39:31.721841 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:31Z is after 2026-02-23T05:33:13Z Mar 11 08:39:32 crc kubenswrapper[4808]: I0311 08:39:32.720215 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:32Z is after 2026-02-23T05:33:13Z Mar 11 08:39:32 crc kubenswrapper[4808]: I0311 08:39:32.818471 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Rotating certificates Mar 11 08:39:32 crc kubenswrapper[4808]: E0311 08:39:32.824103 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:32Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:33 crc kubenswrapper[4808]: I0311 08:39:33.720500 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:33Z is after 2026-02-23T05:33:13Z Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.061868 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.062123 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.063728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.063799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.063824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.064617 4808 scope.go:117] "RemoveContainer" 
containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:34 crc kubenswrapper[4808]: E0311 08:39:34.064896 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:34 crc kubenswrapper[4808]: W0311 08:39:34.130282 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:34Z is after 2026-02-23T05:33:13Z Mar 11 08:39:34 crc kubenswrapper[4808]: E0311 08:39:34.130423 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:34 crc kubenswrapper[4808]: E0311 08:39:34.207590 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:34Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:34 crc kubenswrapper[4808]: I0311 08:39:34.721106 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:34Z is after 2026-02-23T05:33:13Z Mar 11 08:39:35 crc kubenswrapper[4808]: W0311 08:39:35.168452 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:35Z is after 2026-02-23T05:33:13Z Mar 11 08:39:35 crc kubenswrapper[4808]: E0311 08:39:35.169215 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.424468 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:35 crc 
kubenswrapper[4808]: I0311 08:39:35.424774 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.426452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.426658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.426697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.427728 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:35 crc kubenswrapper[4808]: E0311 08:39:35.428131 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:35 crc kubenswrapper[4808]: I0311 08:39:35.721987 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:35Z is after 2026-02-23T05:33:13Z Mar 11 08:39:36 crc kubenswrapper[4808]: W0311 08:39:36.487698 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:36Z is after 2026-02-23T05:33:13Z Mar 11 08:39:36 crc kubenswrapper[4808]: E0311 08:39:36.487770 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:36 crc kubenswrapper[4808]: I0311 08:39:36.719734 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:36Z is after 2026-02-23T05:33:13Z Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.607881 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.609552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.609600 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.609615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.609646 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:37 crc kubenswrapper[4808]: E0311 08:39:37.614426 4808 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:37Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:39:37 crc kubenswrapper[4808]: E0311 08:39:37.622081 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:37Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:39:37 crc kubenswrapper[4808]: I0311 08:39:37.721823 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:37Z is after 2026-02-23T05:33:13Z Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.467780 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.467891 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.467994 4808 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.468255 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.470126 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.470187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.470204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.470990 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.471241 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf" gracePeriod=30 Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.727062 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-11T08:39:38Z is after 2026-02-23T05:33:13Z Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.965172 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.965963 4808 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf" exitCode=255 Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.966016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf"} Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.966052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658"} Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.966178 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.968199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.968275 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:38 crc kubenswrapper[4808]: I0311 08:39:38.968299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:39 crc kubenswrapper[4808]: W0311 08:39:39.199180 4808 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:39Z is after 2026-02-23T05:33:13Z Mar 11 08:39:39 crc kubenswrapper[4808]: E0311 08:39:39.199704 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:39:39 crc kubenswrapper[4808]: I0311 08:39:39.719105 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:39Z is after 2026-02-23T05:33:13Z Mar 11 08:39:39 crc kubenswrapper[4808]: E0311 08:39:39.891001 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:39:40 crc kubenswrapper[4808]: I0311 08:39:40.721867 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:40Z is after 2026-02-23T05:33:13Z Mar 11 08:39:41 crc kubenswrapper[4808]: I0311 08:39:41.722127 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:41Z is after 2026-02-23T05:33:13Z Mar 11 08:39:42 crc kubenswrapper[4808]: I0311 08:39:42.719865 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:42Z is after 2026-02-23T05:33:13Z Mar 11 08:39:43 crc kubenswrapper[4808]: I0311 08:39:43.721982 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:43Z is after 2026-02-23T05:33:13Z Mar 11 08:39:44 crc kubenswrapper[4808]: E0311 08:39:44.213508 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:44Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 
08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.410667 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.410887 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.412493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.412568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.412583 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.615341 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.616800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.616836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.616849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.616874 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:44 crc kubenswrapper[4808]: E0311 08:39:44.621959 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-11T08:39:44Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:39:44 crc kubenswrapper[4808]: E0311 08:39:44.628145 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:44Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:39:44 crc kubenswrapper[4808]: I0311 08:39:44.719833 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:44Z is after 2026-02-23T05:33:13Z Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.466467 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.466722 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.468500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.468546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.468560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:45 crc kubenswrapper[4808]: I0311 08:39:45.720724 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:45Z is after 2026-02-23T05:33:13Z Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.722326 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:39:46Z is after 2026-02-23T05:33:13Z Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.789021 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.791414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.791470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.791489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:46 crc kubenswrapper[4808]: I0311 08:39:46.792307 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:47 crc kubenswrapper[4808]: W0311 08:39:47.205444 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 11 08:39:47 crc kubenswrapper[4808]: E0311 08:39:47.205802 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: 
User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 11 08:39:47 crc kubenswrapper[4808]: I0311 08:39:47.725273 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.016257 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.017135 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.019250 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" exitCode=255 Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.019288 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d"} Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.019321 4808 scope.go:117] "RemoveContainer" containerID="7a72abeb1c5a2b28e37dac3a5cd2ea17fe2d31d2bd22f4ac18bbaf98efc1af87" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.019486 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.020474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 
08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.020518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.020532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.025479 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:39:48 crc kubenswrapper[4808]: E0311 08:39:48.025759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.467334 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.467483 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:39:48 crc kubenswrapper[4808]: I0311 08:39:48.721231 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:49 crc kubenswrapper[4808]: I0311 08:39:49.023224 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:39:49 crc kubenswrapper[4808]: I0311 08:39:49.285231 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:39:49 crc kubenswrapper[4808]: I0311 08:39:49.310819 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 11 08:39:49 crc kubenswrapper[4808]: I0311 08:39:49.723418 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:49 crc kubenswrapper[4808]: E0311 08:39:49.891126 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:39:50 crc kubenswrapper[4808]: W0311 08:39:50.195426 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 11 08:39:50 crc kubenswrapper[4808]: E0311 08:39:50.195503 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 11 08:39:50 crc kubenswrapper[4808]: I0311 08:39:50.724769 4808 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.622527 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.624397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.624435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.624448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.624481 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:51 crc kubenswrapper[4808]: E0311 08:39:51.627782 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:39:51 crc kubenswrapper[4808]: E0311 08:39:51.630160 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:39:51 crc kubenswrapper[4808]: I0311 08:39:51.720730 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:52 crc kubenswrapper[4808]: I0311 08:39:52.723601 4808 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:53 crc kubenswrapper[4808]: W0311 08:39:53.702004 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 11 08:39:53 crc kubenswrapper[4808]: E0311 08:39:53.702088 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 11 08:39:53 crc kubenswrapper[4808]: I0311 08:39:53.723250 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 08:39:54.061265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 08:39:54.061550 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 08:39:54.063130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 08:39:54.063194 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 
08:39:54.063217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:54 crc kubenswrapper[4808]: I0311 08:39:54.064143 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.064457 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.232917 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b3d37b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,LastTimestamp:2026-03-11 08:39:09.712636789 +0000 UTC m=+0.665960139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.238602 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.244512 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.252041 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.259242 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1bdf4505a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.882560602 +0000 UTC m=+0.835883932,LastTimestamp:2026-03-11 08:39:09.882560602 +0000 UTC m=+0.835883932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.267223 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.891021835 +0000 
UTC m=+0.844345165,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.280075 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.891043484 +0000 UTC m=+0.844366814,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.287786 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.891054184 +0000 UTC m=+0.844377514,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 
08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.294065 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.892825011 +0000 UTC m=+0.846148381,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.299981 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.89286317 +0000 UTC m=+0.846186540,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.306495 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.89288883 +0000 UTC m=+0.846212190,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.312870 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.893776874 +0000 UTC m=+0.847100234,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.319307 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.893832592 +0000 UTC m=+0.847155952,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.327650 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.893856122 +0000 UTC m=+0.847179492,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.335092 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.894658077 +0000 UTC m=+0.847981437,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.340123 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.894682737 +0000 UTC m=+0.848006067,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.347091 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.894706586 +0000 UTC m=+0.848029946,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.352443 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.894738896 +0000 UTC m=+0.848062266,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.381805 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC 
m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.894767185 +0000 UTC m=+0.848090515,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.389933 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.894791585 +0000 UTC m=+0.848114915,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.394645 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.897562533 +0000 UTC m=+0.850885863,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.401317 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.897582983 +0000 UTC m=+0.850906313,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.405411 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b81499cb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b81499cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.784013259 +0000 UTC m=+0.737336609,LastTimestamp:2026-03-11 08:39:09.897598722 +0000 UTC m=+0.850922062,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.411186 4808 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189bbcb1b813e98c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b813e98c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.78396814 +0000 UTC m=+0.737291500,LastTimestamp:2026-03-11 08:39:09.898103893 +0000 UTC m=+0.851427253,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.415343 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbcb1b8145a33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbcb1b8145a33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:09.783996979 +0000 UTC m=+0.737320329,LastTimestamp:2026-03-11 08:39:09.898127372 +0000 UTC m=+0.851450732,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.420403 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb1d6e40688 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.300923528 +0000 UTC m=+1.254246838,LastTimestamp:2026-03-11 08:39:10.300923528 +0000 UTC m=+1.254246838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.424693 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb1d719880c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.304430092 +0000 UTC m=+1.257753452,LastTimestamp:2026-03-11 08:39:10.304430092 +0000 UTC m=+1.257753452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 
08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.428559 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb1d779d665 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.310741605 +0000 UTC m=+1.264064935,LastTimestamp:2026-03-11 08:39:10.310741605 +0000 UTC m=+1.264064935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.435590 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb1d90fe199 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.337352089 +0000 UTC m=+1.290675419,LastTimestamp:2026-03-11 08:39:10.337352089 +0000 UTC m=+1.290675419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.439994 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb1d930bf77 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.339506039 +0000 UTC m=+1.292829389,LastTimestamp:2026-03-11 08:39:10.339506039 +0000 UTC m=+1.292829389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.444758 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb1fa88bdb2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.898920882 +0000 UTC m=+1.852244212,LastTimestamp:2026-03-11 08:39:10.898920882 +0000 UTC m=+1.852244212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.449109 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb1fa8a4383 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.899020675 +0000 UTC m=+1.852343995,LastTimestamp:2026-03-11 08:39:10.899020675 +0000 UTC m=+1.852343995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.453744 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb1fa8b4fff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.899089407 +0000 UTC m=+1.852412747,LastTimestamp:2026-03-11 08:39:10.899089407 +0000 UTC m=+1.852412747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.458937 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb1fa9a9e3d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.900092477 +0000 UTC m=+1.853415817,LastTimestamp:2026-03-11 08:39:10.900092477 +0000 UTC m=+1.853415817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.466237 4808 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb1faa0ec14 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.90050562 +0000 UTC m=+1.853828960,LastTimestamp:2026-03-11 08:39:10.90050562 +0000 UTC m=+1.853828960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.471300 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb1fb449f46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.911233862 +0000 UTC m=+1.864557202,LastTimestamp:2026-03-11 08:39:10.911233862 +0000 UTC m=+1.864557202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.478395 4808 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb1fb577055 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.912467029 +0000 UTC m=+1.865790359,LastTimestamp:2026-03-11 08:39:10.912467029 +0000 UTC m=+1.865790359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.482995 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb1fb59031e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.912570142 +0000 UTC m=+1.865893472,LastTimestamp:2026-03-11 08:39:10.912570142 +0000 UTC m=+1.865893472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.487609 4808 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb1fb62f8ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.913222911 +0000 UTC m=+1.866546241,LastTimestamp:2026-03-11 08:39:10.913222911 +0000 UTC m=+1.866546241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.492467 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb1fb719e5e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.91418275 +0000 UTC 
m=+1.867506080,LastTimestamp:2026-03-11 08:39:10.91418275 +0000 UTC m=+1.867506080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.497177 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb1fbc9da00 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.919965184 +0000 UTC m=+1.873288514,LastTimestamp:2026-03-11 08:39:10.919965184 +0000 UTC m=+1.873288514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.503222 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb20fc421bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.255134651 +0000 UTC m=+2.208457991,LastTimestamp:2026-03-11 08:39:11.255134651 +0000 UTC m=+2.208457991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.509574 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb210feee02 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.27576525 +0000 UTC m=+2.229088600,LastTimestamp:2026-03-11 08:39:11.27576525 +0000 UTC m=+2.229088600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.514759 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb21118c5b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.277458871 +0000 UTC m=+2.230782211,LastTimestamp:2026-03-11 08:39:11.277458871 +0000 UTC m=+2.230782211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.519949 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb21c251ddc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.462817244 +0000 UTC m=+2.416140554,LastTimestamp:2026-03-11 08:39:11.462817244 +0000 UTC m=+2.416140554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.526476 4808 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb21cdfc1bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.475048891 +0000 UTC m=+2.428372211,LastTimestamp:2026-03-11 08:39:11.475048891 +0000 UTC m=+2.428372211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.531908 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb21cf70ce1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 
08:39:11.476575457 +0000 UTC m=+2.429898777,LastTimestamp:2026-03-11 08:39:11.476575457 +0000 UTC m=+2.429898777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.537085 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb2272946e6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.64763927 +0000 UTC m=+2.600962590,LastTimestamp:2026-03-11 08:39:11.64763927 +0000 UTC m=+2.600962590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.543782 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb2281a8e00 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.663451648 +0000 UTC m=+2.616775008,LastTimestamp:2026-03-11 08:39:11.663451648 +0000 UTC m=+2.616775008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.547906 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb23115e090 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.814140048 +0000 UTC m=+2.767463368,LastTimestamp:2026-03-11 08:39:11.814140048 +0000 UTC m=+2.767463368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.553418 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2313caec2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.816683202 +0000 UTC m=+2.770006522,LastTimestamp:2026-03-11 08:39:11.816683202 +0000 UTC m=+2.770006522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.558301 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb2317373e3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.820272611 +0000 UTC m=+2.773595931,LastTimestamp:2026-03-11 08:39:11.820272611 +0000 UTC m=+2.773595931,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.565043 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb231b76f87 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.824727943 +0000 UTC m=+2.778051303,LastTimestamp:2026-03-11 08:39:11.824727943 +0000 UTC m=+2.778051303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.571844 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb240551c00 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.069942272 +0000 UTC m=+3.023265592,LastTimestamp:2026-03-11 08:39:12.069942272 +0000 UTC m=+3.023265592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.576120 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb240625b38 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.070810424 +0000 UTC m=+3.024133744,LastTimestamp:2026-03-11 08:39:12.070810424 +0000 UTC m=+3.024133744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.582933 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb240644725 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.070936357 +0000 UTC m=+3.024259677,LastTimestamp:2026-03-11 08:39:12.070936357 +0000 UTC m=+3.024259677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.589606 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb240656a67 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.071010919 +0000 UTC m=+3.024334259,LastTimestamp:2026-03-11 08:39:12.071010919 +0000 UTC m=+3.024334259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.596802 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb240e0ef13 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.079105811 +0000 UTC m=+3.032429131,LastTimestamp:2026-03-11 08:39:12.079105811 +0000 UTC m=+3.032429131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.603458 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb240eff990 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.080091536 +0000 UTC m=+3.033414856,LastTimestamp:2026-03-11 08:39:12.080091536 +0000 UTC m=+3.033414856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.610648 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbcb241276b3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.083725116 +0000 UTC m=+3.037048436,LastTimestamp:2026-03-11 08:39:12.083725116 +0000 UTC m=+3.037048436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.616425 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2419661ce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.090997198 +0000 UTC m=+3.044320518,LastTimestamp:2026-03-11 08:39:12.090997198 +0000 UTC m=+3.044320518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.622950 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb241ba501f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.093351967 +0000 UTC m=+3.046675287,LastTimestamp:2026-03-11 08:39:12.093351967 +0000 UTC m=+3.046675287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.630129 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb241cba679 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.094488185 +0000 UTC m=+3.047811505,LastTimestamp:2026-03-11 08:39:12.094488185 +0000 UTC m=+3.047811505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.638062 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb24be4c251 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.263905873 +0000 UTC m=+3.217229193,LastTimestamp:2026-03-11 08:39:12.263905873 +0000 UTC m=+3.217229193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.644957 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb24be75c45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.264076357 +0000 UTC m=+3.217399677,LastTimestamp:2026-03-11 
08:39:12.264076357 +0000 UTC m=+3.217399677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.655023 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb24cbc4f9e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.278032286 +0000 UTC m=+3.231355616,LastTimestamp:2026-03-11 08:39:12.278032286 +0000 UTC m=+3.231355616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.656678 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb24ccde752 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.279185234 +0000 UTC m=+3.232508574,LastTimestamp:2026-03-11 08:39:12.279185234 +0000 UTC m=+3.232508574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.660650 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb24cf8ba2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.281991724 +0000 UTC m=+3.235315054,LastTimestamp:2026-03-11 08:39:12.281991724 +0000 UTC m=+3.235315054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.664859 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb24d03827a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.282698362 +0000 UTC m=+3.236021692,LastTimestamp:2026-03-11 08:39:12.282698362 +0000 UTC m=+3.236021692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.669424 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb2582ab7fe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.469817342 +0000 UTC m=+3.423140662,LastTimestamp:2026-03-11 08:39:12.469817342 +0000 UTC m=+3.423140662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.676271 4808 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb258a87d98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.478059928 +0000 UTC m=+3.431383248,LastTimestamp:2026-03-11 08:39:12.478059928 +0000 UTC m=+3.431383248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.680860 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbcb258fb30a1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.483479713 +0000 UTC m=+3.436803053,LastTimestamp:2026-03-11 08:39:12.483479713 +0000 UTC m=+3.436803053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.685436 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb259ccfe68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.497229416 +0000 UTC m=+3.450552736,LastTimestamp:2026-03-11 08:39:12.497229416 +0000 UTC m=+3.450552736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.689854 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb259def6fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.498407165 +0000 UTC m=+3.451730475,LastTimestamp:2026-03-11 08:39:12.498407165 +0000 UTC m=+3.451730475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.694254 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb26419bd4b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.670031179 +0000 UTC m=+3.623354509,LastTimestamp:2026-03-11 08:39:12.670031179 +0000 UTC m=+3.623354509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.698782 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb264fbfdc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.684858819 +0000 UTC m=+3.638182129,LastTimestamp:2026-03-11 08:39:12.684858819 +0000 UTC m=+3.638182129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.705623 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb2650ebfad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.686088109 +0000 UTC m=+3.639411429,LastTimestamp:2026-03-11 08:39:12.686088109 +0000 UTC m=+3.639411429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.713323 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb26e9d6b2f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.846433071 +0000 UTC m=+3.799756391,LastTimestamp:2026-03-11 08:39:12.846433071 +0000 UTC m=+3.799756391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.717954 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb2704c9d42 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.874691906 +0000 UTC m=+3.828015226,LastTimestamp:2026-03-11 08:39:12.874691906 +0000 UTC m=+3.828015226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc 
kubenswrapper[4808]: I0311 08:39:54.718186 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.723799 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb2716e6c68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.89368484 +0000 UTC m=+3.847008160,LastTimestamp:2026-03-11 08:39:12.89368484 +0000 UTC m=+3.847008160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.728287 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb27b988094 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:13.064214676 +0000 UTC m=+4.017537986,LastTimestamp:2026-03-11 08:39:13.064214676 +0000 UTC m=+4.017537986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.734484 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb27c4e71ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:13.076138414 +0000 UTC m=+4.029461734,LastTimestamp:2026-03-11 08:39:13.076138414 +0000 UTC m=+4.029461734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.741962 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2ab8ece2b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:13.868885547 +0000 UTC m=+4.822208867,LastTimestamp:2026-03-11 08:39:13.868885547 +0000 UTC m=+4.822208867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.749069 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2b6baf985 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.056329605 +0000 UTC m=+5.009652925,LastTimestamp:2026-03-11 08:39:14.056329605 +0000 UTC m=+5.009652925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.754985 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2b75a1713 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.066757395 +0000 UTC m=+5.020080715,LastTimestamp:2026-03-11 08:39:14.066757395 +0000 UTC m=+5.020080715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.757581 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2b76e457e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.068079998 +0000 UTC m=+5.021403308,LastTimestamp:2026-03-11 08:39:14.068079998 +0000 UTC m=+5.021403308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.760599 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2c42c3893 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.281855123 +0000 UTC m=+5.235178453,LastTimestamp:2026-03-11 08:39:14.281855123 +0000 UTC m=+5.235178453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.764399 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2c51315db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.296985051 +0000 UTC m=+5.250308371,LastTimestamp:2026-03-11 08:39:14.296985051 +0000 UTC m=+5.250308371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.768049 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2c528fa69 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.298419817 +0000 UTC m=+5.251743147,LastTimestamp:2026-03-11 08:39:14.298419817 +0000 UTC m=+5.251743147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.774592 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2d31088a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.531698848 +0000 UTC m=+5.485022168,LastTimestamp:2026-03-11 08:39:14.531698848 +0000 UTC m=+5.485022168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.778958 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2d3a83947 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.541640007 +0000 UTC m=+5.494963327,LastTimestamp:2026-03-11 08:39:14.541640007 +0000 UTC m=+5.494963327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.783697 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2d3b8a689 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.542716553 +0000 UTC m=+5.496039873,LastTimestamp:2026-03-11 08:39:14.542716553 +0000 UTC m=+5.496039873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.789862 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2e192db3f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.775120703 +0000 UTC m=+5.728444023,LastTimestamp:2026-03-11 08:39:14.775120703 +0000 UTC m=+5.728444023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.796070 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2e244d521 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.786784545 +0000 UTC m=+5.740107865,LastTimestamp:2026-03-11 08:39:14.786784545 +0000 UTC m=+5.740107865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.802927 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2e25b0531 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.788238641 +0000 UTC m=+5.741561961,LastTimestamp:2026-03-11 08:39:14.788238641 +0000 UTC m=+5.741561961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.810102 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbcb2ec99f6ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.960135851 +0000 UTC m=+5.913459161,LastTimestamp:2026-03-11 08:39:14.960135851 +0000 UTC m=+5.913459161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.816077 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189bbcb2ed4e7e82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:14.971967106 +0000 UTC m=+5.925290426,LastTimestamp:2026-03-11 08:39:14.971967106 +0000 UTC m=+5.925290426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.827246 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbcb3bd9c36c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 11 08:39:54 crc kubenswrapper[4808]: body: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:18.466721481 +0000 UTC m=+9.420044801,LastTimestamp:2026-03-11 08:39:18.466721481 +0000 UTC m=+9.420044801,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.833904 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb3bd9d5ae7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:18.466796263 +0000 UTC m=+9.420119583,LastTimestamp:2026-03-11 08:39:18.466796263 +0000 UTC m=+9.420119583,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.841251 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-apiserver-crc.189bbcb513d895d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get 
"https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer Mar 11 08:39:54 crc kubenswrapper[4808]: body: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:24.208485843 +0000 UTC m=+15.161809173,LastTimestamp:2026-03-11 08:39:24.208485843 +0000 UTC m=+15.161809173,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.845471 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb513da57b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52524->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:24.208601016 +0000 UTC m=+15.161924336,LastTimestamp:2026-03-11 08:39:24.208601016 +0000 UTC m=+15.161924336,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.851768 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-apiserver-crc.189bbcb5145421e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 11 08:39:54 crc kubenswrapper[4808]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:39:54 crc kubenswrapper[4808]: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:24.216582626 +0000 UTC m=+15.169905946,LastTimestamp:2026-03-11 08:39:24.216582626 +0000 UTC m=+15.169905946,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.858708 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb51454c233 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 
08:39:24.216623667 +0000 UTC m=+15.169946987,LastTimestamp:2026-03-11 08:39:24.216623667 +0000 UTC m=+15.169946987,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.862500 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189bbcb5145421e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-apiserver-crc.189bbcb5145421e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 11 08:39:54 crc kubenswrapper[4808]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:39:54 crc kubenswrapper[4808]: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:24.216582626 +0000 UTC m=+15.169905946,LastTimestamp:2026-03-11 08:39:24.221106456 +0000 UTC m=+15.174429796,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.865783 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189bbcb51454c233\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb51454c233 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:24.216623667 +0000 UTC m=+15.169946987,LastTimestamp:2026-03-11 08:39:24.221161958 +0000 UTC m=+15.174485288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.872214 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189bbcb2650ebfad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbcb2650ebfad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:12.686088109 +0000 UTC m=+3.639411429,LastTimestamp:2026-03-11 08:39:24.918176453 +0000 UTC m=+15.871499813,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.892935 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a60551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:39:54 crc kubenswrapper[4808]: body: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466584913 +0000 UTC m=+19.419908273,LastTimestamp:2026-03-11 08:39:28.466584913 +0000 UTC m=+19.419908273,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.900388 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a7554e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466670926 +0000 UTC m=+19.419994286,LastTimestamp:2026-03-11 08:39:28.466670926 +0000 UTC m=+19.419994286,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.908465 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb611a60551\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a60551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:39:54 crc kubenswrapper[4808]: body: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466584913 +0000 UTC m=+19.419908273,LastTimestamp:2026-03-11 08:39:38.467863666 
+0000 UTC m=+29.421187036,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.915905 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb611a7554e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a7554e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466670926 +0000 UTC m=+19.419994286,LastTimestamp:2026-03-11 08:39:38.467940289 +0000 UTC m=+29.421263649,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.923834 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb865f8930b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:38.471215883 +0000 UTC m=+29.424539243,LastTimestamp:2026-03-11 08:39:38.471215883 +0000 UTC m=+29.424539243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.930660 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb1fb719e5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb1fb719e5e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:10.91418275 +0000 UTC m=+1.867506080,LastTimestamp:2026-03-11 08:39:38.594826135 +0000 UTC m=+29.548149495,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.937187 4808 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb20fc421bb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb20fc421bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.255134651 +0000 UTC m=+2.208457991,LastTimestamp:2026-03-11 08:39:38.815431362 +0000 UTC m=+29.768754722,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.943708 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb210feee02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb210feee02 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:11.27576525 +0000 UTC 
m=+2.229088600,LastTimestamp:2026-03-11 08:39:38.827915492 +0000 UTC m=+29.781238842,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.952382 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb611a60551\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:39:54 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a60551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:39:54 crc kubenswrapper[4808]: body: Mar 11 08:39:54 crc kubenswrapper[4808]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466584913 +0000 UTC m=+19.419908273,LastTimestamp:2026-03-11 08:39:48.467441032 +0000 UTC m=+39.420764442,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:54 crc kubenswrapper[4808]: > Mar 11 08:39:54 crc kubenswrapper[4808]: E0311 08:39:54.959934 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb611a7554e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a7554e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466670926 +0000 UTC m=+19.419994286,LastTimestamp:2026-03-11 08:39:48.467527545 +0000 UTC m=+39.420850905,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.424855 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.425178 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.427289 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.427400 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.427420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.428193 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:39:55 crc 
kubenswrapper[4808]: E0311 08:39:55.428514 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:39:55 crc kubenswrapper[4808]: I0311 08:39:55.722925 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:56 crc kubenswrapper[4808]: I0311 08:39:56.723797 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:57 crc kubenswrapper[4808]: W0311 08:39:57.273806 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:57 crc kubenswrapper[4808]: E0311 08:39:57.273881 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 11 08:39:57 crc kubenswrapper[4808]: I0311 08:39:57.722827 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.467060 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.467144 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:39:58 crc kubenswrapper[4808]: E0311 08:39:58.474506 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbcb611a60551\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:39:58 crc kubenswrapper[4808]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbcb611a60551 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:39:58 crc kubenswrapper[4808]: body: Mar 11 08:39:58 crc kubenswrapper[4808]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:39:28.466584913 +0000 UTC m=+19.419908273,LastTimestamp:2026-03-11 08:39:58.467123958 +0000 UTC m=+49.420447318,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:39:58 crc kubenswrapper[4808]: > Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.628740 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.630482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.630559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.630586 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.630633 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:39:58 crc kubenswrapper[4808]: E0311 08:39:58.636245 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:39:58 crc kubenswrapper[4808]: E0311 08:39:58.636628 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:39:58 crc kubenswrapper[4808]: I0311 08:39:58.720331 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:59 crc kubenswrapper[4808]: I0311 08:39:59.723113 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:39:59 crc kubenswrapper[4808]: E0311 08:39:59.891286 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:40:00 crc kubenswrapper[4808]: I0311 08:40:00.724591 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:01 crc kubenswrapper[4808]: I0311 08:40:01.723656 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.309734 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.309975 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.311922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.311985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.312004 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:02 crc kubenswrapper[4808]: I0311 08:40:02.722683 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:03 crc kubenswrapper[4808]: I0311 08:40:03.723213 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:04 crc kubenswrapper[4808]: I0311 08:40:04.723806 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.637559 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.639139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.639195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.639213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.639245 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:40:05 crc kubenswrapper[4808]: E0311 08:40:05.644072 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:40:05 crc kubenswrapper[4808]: E0311 08:40:05.644005 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.717835 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.788923 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.791031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.791130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.791151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:05 crc kubenswrapper[4808]: I0311 08:40:05.792046 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:40:05 crc kubenswrapper[4808]: E0311 08:40:05.792339 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:06 crc kubenswrapper[4808]: I0311 08:40:06.723921 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:07 crc kubenswrapper[4808]: I0311 08:40:07.725155 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.467176 4808 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.467336 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.467439 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.467619 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 
08:40:08.469079 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.469139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.469156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.469672 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.469802 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658" gracePeriod=30 Mar 11 08:40:08 crc kubenswrapper[4808]: I0311 08:40:08.723453 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.123705 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.125386 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.125912 4808 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658" exitCode=255 Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.125980 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658"} Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.126038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e245f8f898c89b9c15271b5381bcd9c4dd3de809948843bdf23b025c4b58b667"} Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.126070 4808 scope.go:117] "RemoveContainer" containerID="6553330cd365bcb79f61b1485de903b3a33c17e85ccd9e4c2e6a35c3b0443ccf" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.126261 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.127611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.127668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.127689 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:09 crc kubenswrapper[4808]: I0311 08:40:09.723704 4808 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:09 crc kubenswrapper[4808]: E0311 08:40:09.891429 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.133089 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.134733 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.136184 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.136236 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.136250 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:10 crc kubenswrapper[4808]: I0311 08:40:10.722095 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:11 crc kubenswrapper[4808]: I0311 08:40:11.724545 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:12 crc 
kubenswrapper[4808]: I0311 08:40:12.648210 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:12 crc kubenswrapper[4808]: I0311 08:40:12.650390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:12 crc kubenswrapper[4808]: I0311 08:40:12.650456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:12 crc kubenswrapper[4808]: I0311 08:40:12.650474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:12 crc kubenswrapper[4808]: I0311 08:40:12.650511 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:40:12 crc kubenswrapper[4808]: E0311 08:40:12.657202 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:40:12 crc kubenswrapper[4808]: E0311 08:40:12.658222 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:40:12 crc kubenswrapper[4808]: I0311 08:40:12.724141 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:13 crc kubenswrapper[4808]: I0311 08:40:13.719961 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.410553 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.410737 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.411945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.411968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.411976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.724558 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.984751 4808 csr.go:261] certificate signing request csr-2zv4m is approved, waiting to be issued Mar 11 08:40:14 crc kubenswrapper[4808]: I0311 08:40:14.991346 4808 csr.go:257] certificate signing request csr-2zv4m is issued Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.066246 4808 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.467047 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.467222 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:15 crc 
kubenswrapper[4808]: I0311 08:40:15.468725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.468902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.469006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.470696 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.540049 4808 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.992504 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-01 10:24:58.848932535 +0000 UTC Mar 11 08:40:15 crc kubenswrapper[4808]: I0311 08:40:15.992550 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7105h44m42.85638595s for next certificate rotation Mar 11 08:40:16 crc kubenswrapper[4808]: I0311 08:40:16.147919 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:16 crc kubenswrapper[4808]: I0311 08:40:16.148771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:16 crc kubenswrapper[4808]: I0311 08:40:16.148896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:16 crc kubenswrapper[4808]: I0311 08:40:16.148980 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.659163 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.661541 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.661608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.661626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.661755 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.671241 4808 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.671590 4808 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.671625 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.674976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.675041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.675055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.675073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.675086 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:19Z","lastTransitionTime":"2026-03-11T08:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.690735 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.702454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.702488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.702500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.702517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.702528 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:19Z","lastTransitionTime":"2026-03-11T08:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.717721 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.724534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.724566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.724575 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.724589 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.724598 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:19Z","lastTransitionTime":"2026-03-11T08:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.733073 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.739471 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.739525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.739543 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.739567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.739583 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:19Z","lastTransitionTime":"2026-03-11T08:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.754113 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.754234 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.754256 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.788660 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.789726 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.789769 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.789783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:19 crc kubenswrapper[4808]: I0311 08:40:19.790327 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.854757 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.892214 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:40:19 crc kubenswrapper[4808]: E0311 08:40:19.954928 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:20 crc 
kubenswrapper[4808]: E0311 08:40:20.055968 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:20 crc kubenswrapper[4808]: E0311 08:40:20.157087 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.160650 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.163534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb"} Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.163762 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.165459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.165514 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:20 crc kubenswrapper[4808]: I0311 08:40:20.165537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:20 crc kubenswrapper[4808]: E0311 08:40:20.257477 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:20 crc kubenswrapper[4808]: E0311 08:40:20.357768 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:20 crc kubenswrapper[4808]: E0311 08:40:20.458275 4808 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.046636 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.147126 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.167850 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.168599 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.171046 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" exitCode=255 Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.171097 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb"} Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.171149 4808 scope.go:117] "RemoveContainer" containerID="8ec32f6a6daa85e28d13dc9afd67d901d62245ef2bc288428c423f3ddef1a68d" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.171443 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.173092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:21 
crc kubenswrapper[4808]: I0311 08:40:21.173127 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.173139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:21 crc kubenswrapper[4808]: I0311 08:40:21.173895 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.174104 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.247748 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.348753 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.450009 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.551001 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.651722 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.751870 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 
08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.852535 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:21 crc kubenswrapper[4808]: E0311 08:40:21.953169 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.053909 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.154114 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: I0311 08:40:22.176465 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.255244 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.356389 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: I0311 08:40:22.386779 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.457008 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.558195 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.658545 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.758695 4808 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.859153 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:22 crc kubenswrapper[4808]: E0311 08:40:22.960204 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.061452 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.162278 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.262691 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.363758 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.464896 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.565606 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.665947 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.766784 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc kubenswrapper[4808]: E0311 08:40:23.867354 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:23 crc 
kubenswrapper[4808]: E0311 08:40:23.968057 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.061660 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.061885 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.063554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.063634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.063657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.064785 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.065067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.069266 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.169474 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not 
found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.269613 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.370209 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.416641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.416829 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.418272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.418353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:24 crc kubenswrapper[4808]: I0311 08:40:24.418424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.470909 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.571778 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.672798 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.773545 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.874584 4808 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:24 crc kubenswrapper[4808]: E0311 08:40:24.975103 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.076096 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.177056 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.278165 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.379265 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.424877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.425115 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.427189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.427237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.427248 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.428016 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 
11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.428235 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.480238 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: E0311 08:40:25.580853 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.589823 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.682820 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.682864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.682874 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.682887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.682895 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:25Z","lastTransitionTime":"2026-03-11T08:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.786067 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.786134 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.786153 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.786178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.786196 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:25Z","lastTransitionTime":"2026-03-11T08:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.889156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.889226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.889247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.889278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.889301 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:25Z","lastTransitionTime":"2026-03-11T08:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.992279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.992341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.992396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.992424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:25 crc kubenswrapper[4808]: I0311 08:40:25.992446 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:25Z","lastTransitionTime":"2026-03-11T08:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.051614 4808 apiserver.go:52] "Watching apiserver" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.057166 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.057647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.058289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.058420 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.058494 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.059971 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.060354 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.060454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.061155 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.061162 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.061269 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.061428 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.061565 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.061566 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.063741 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.063784 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.063818 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.063949 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.064217 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.065029 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.093219 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.094636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.094693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.094719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.094749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.094774 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.108966 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.125257 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.125885 4808 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.141516 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.160729 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.177549 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.183792 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.183881 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.183936 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.183983 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184020 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184074 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184121 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184158 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184196 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184231 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184263 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: 
I0311 08:40:26.184345 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184413 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184448 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184516 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184585 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184626 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184665 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184700 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184773 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:40:26 crc 
kubenswrapper[4808]: I0311 08:40:26.184807 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184803 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184843 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184916 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184951 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.184988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185025 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185061 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185340 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185341 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185391 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185423 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185637 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185649 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185679 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185763 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185811 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185847 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185911 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185947 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185981 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186012 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186050 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186095 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186140 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 
11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186222 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186320 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186354 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186422 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186457 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186488 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186521 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186553 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186587 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186618 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186650 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186739 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186779 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186824 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186954 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189064 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189117 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189154 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189181 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189203 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189278 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189330 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189354 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189400 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189423 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189490 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189535 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189584 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189609 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189631 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189677 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189703 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189750 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189776 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189859 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189956 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190118 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190150 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190173 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190197 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190219 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190242 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190290 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190344 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190391 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190444 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190489 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190538 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190564 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190589 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190614 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190637 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190662 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190788 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190810 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190882 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc 
kubenswrapper[4808]: I0311 08:40:26.190905 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190929 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190952 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190987 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191012 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191092 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191122 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191146 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191256 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191282 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: 
I0311 08:40:26.191347 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191428 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191452 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191475 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191502 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191529 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191554 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191578 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191603 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191629 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191653 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191679 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191702 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191754 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191780 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " 
Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191830 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191857 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191932 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191957 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191981 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192006 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192031 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192067 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192102 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192131 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192224 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192280 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192306 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192338 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192379 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192458 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192519 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") 
" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192548 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192578 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192695 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192724 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192751 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192777 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192805 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192832 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192861 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192888 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192916 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " 
Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193364 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193554 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193656 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193684 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc 
kubenswrapper[4808]: I0311 08:40:26.193713 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193787 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193814 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193892 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193982 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193999 4808 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194016 4808 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194030 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194044 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194059 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198910 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.185949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186174 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186199 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.186887 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187437 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.203436 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187481 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187766 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187787 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187899 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.187933 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188005 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188103 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188334 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188524 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205039 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.188841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189273 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.189547 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190076 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205623 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190680 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.190678 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191580 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191732 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.191923 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192176 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192422 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192662 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.192934 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193061 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193193 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.193241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194347 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.194923 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.195296 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.195526 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.195794 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196105 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196173 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196884 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.206125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196904 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196945 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.196958 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.197180 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.197777 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.197913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198004 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198012 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198054 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198073 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198426 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198516 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.198749 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.199270 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.199749 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.199752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.199959 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200043 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200242 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200203 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200327 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200557 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200660 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.200827 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201016 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201272 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201277 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201326 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201499 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.201712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.202693 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.202669 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.203342 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.203820 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.203858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.203995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.204300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.204259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.204488 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.204500 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.204947 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205335 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205334 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.205417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.206326 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207033 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207507 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207696 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.207762 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208093 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208213 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208203 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208251 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208698 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208753 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208774 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208918 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.208930 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.209058 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.209154 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:26.70912516 +0000 UTC m=+77.662448520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209153 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209165 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.209251 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.209392 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:26.709304696 +0000 UTC m=+77.662628026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209485 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209564 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.209953 4808 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.210049 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:26.710029817 +0000 UTC m=+77.663353237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.211766 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.212030 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.213807 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.220979 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.221115 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.221669 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.223970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.224553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.223470 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.227557 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.228360 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.228619 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.229315 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.229344 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.230783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.231405 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.235418 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.235673 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.235693 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.235709 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.235773 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:26.735754907 +0000 UTC m=+77.689078237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.235986 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.238959 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.238977 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.239011 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.239033 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.239119 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:26.739089575 +0000 UTC m=+77.692412925 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.240189 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.240591 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.240850 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.242235 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.242829 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.243460 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.245748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.245890 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246008 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246051 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246221 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246245 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246349 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246438 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246543 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246916 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246960 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.246985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247093 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247130 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247555 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.247719 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248011 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248195 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248218 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248409 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248778 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248850 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.248941 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.249182 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.250192 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.252601 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.252974 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.253497 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.253632 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.254336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.255019 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.255182 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.255306 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.255535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.255770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.256037 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.256217 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.256334 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.256491 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.256867 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.260183 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.281881 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.286098 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.287290 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295327 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node 
\"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295344 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295435 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295456 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295469 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295483 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295495 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 
11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295510 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295523 4808 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295535 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295547 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295570 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295586 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295600 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295612 4808 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295625 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295637 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295653 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295677 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295691 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295703 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295716 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295730 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295743 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295756 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295769 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295782 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295797 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295811 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295832 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295847 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295863 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295879 4808 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295894 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295910 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295932 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295948 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295965 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.295987 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296005 4808 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296021 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 
08:40:26.296033 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296045 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296058 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296070 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296082 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296098 4808 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296115 4808 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296132 4808 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296145 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296156 4808 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296168 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296180 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296192 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296204 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296216 4808 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296227 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296239 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296251 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296262 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296275 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296287 4808 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296299 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" 
Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296311 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296324 4808 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296336 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296348 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296363 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296395 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296408 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296421 4808 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296432 4808 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296444 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296456 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296472 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296487 4808 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296499 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296511 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296523 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296535 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296546 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296560 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296572 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296585 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296596 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on 
node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296609 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296620 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296631 4808 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296643 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296657 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296669 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296684 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 
08:40:26.296697 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296710 4808 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296722 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296734 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296746 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296758 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296771 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296783 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296794 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296806 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296818 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296829 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296840 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296852 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296863 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 
08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296875 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296887 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296898 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296910 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296922 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296933 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296946 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296957 4808 reconciler_common.go:293] "Volume detached 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296970 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296982 4808 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.296995 4808 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297007 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297018 4808 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297031 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297042 4808 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297054 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297065 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297078 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297091 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297105 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297123 4808 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297140 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297156 4808 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297174 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297190 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297205 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297222 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297235 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297247 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 
08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297262 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297278 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297293 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297309 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297324 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297338 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297349 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297383 4808 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297397 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297409 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297421 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297433 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297444 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297456 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297468 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297479 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297491 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297531 4808 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297543 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297557 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297569 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297584 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: 
I0311 08:40:26.297598 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297611 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297625 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297638 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297653 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297666 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297679 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297692 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297706 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297719 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297733 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297746 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297759 4808 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297771 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297782 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath 
\"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297793 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297804 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297816 4808 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297827 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297840 4808 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297852 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297864 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 11 
08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297878 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297890 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297901 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297912 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297924 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297935 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297946 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.297958 4808 reconciler_common.go:293] "Volume detached for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.311158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.311206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.311217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.311237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.311249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.386773 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.400157 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.410915 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.413768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.413805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.413815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.413832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.413845 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: W0311 08:40:26.436007 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9d22ab34466f4592e509ee44c3640fa8957a6cc583dffd62c9ff6cea47bf5a4e WatchSource:0}: Error finding container 9d22ab34466f4592e509ee44c3640fa8957a6cc583dffd62c9ff6cea47bf5a4e: Status 404 returned error can't find the container with id 9d22ab34466f4592e509ee44c3640fa8957a6cc583dffd62c9ff6cea47bf5a4e Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.517090 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.517588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.517607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.517633 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.517651 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.619614 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.619651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.619660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.619678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.619687 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.722421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.722469 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.722483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.722503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.722516 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.803621 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.803717 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.803746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803832 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:27.8037929 +0000 UTC m=+78.757116240 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803839 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.803870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.803897 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803919 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:27.803906773 +0000 UTC m=+78.757230103 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803918 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803977 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.803993 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804005 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804100 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804159 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804184 
4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804120 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:27.804048118 +0000 UTC m=+78.757371478 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804294 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:27.804267314 +0000 UTC m=+78.757590674 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: E0311 08:40:26.804319 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:27.804306885 +0000 UTC m=+78.757630245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.827029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.827087 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.827105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.827139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.827167 4808 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.930339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.930461 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.930486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.930520 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:26 crc kubenswrapper[4808]: I0311 08:40:26.930544 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:26Z","lastTransitionTime":"2026-03-11T08:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.033639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.033703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.033721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.033746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.033764 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.137193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.137260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.137283 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.137309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.137331 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.203160 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9d22ab34466f4592e509ee44c3640fa8957a6cc583dffd62c9ff6cea47bf5a4e"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.205525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.205618 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.205650 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"462dac10a7c7e3ac617b87b0fdc8a5e6e58a0d3c16325ee8fdf2a797fbf62893"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.207437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.207493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39cebb2b461adfd21d04492b7eb7114c64ae370ff64cd15b064eb35b8d8433fa"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.231386 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.240502 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.240550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.240562 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.240582 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.240595 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.257548 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.273647 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.296796 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.322080 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.342803 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.343194 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.343228 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: 
I0311 08:40:27.343243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.343269 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.343285 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.367450 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.386463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.405227 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.427761 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446269 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.446702 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.488215 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.508835 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.548666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.548727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.548742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.548758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.548774 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.652176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.652340 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.652405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.652439 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.652462 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.755935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.756418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.756578 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.756715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.756819 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.789098 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.789222 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.789432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.789625 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.789656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.789993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.794689 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.795527 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.797112 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.798090 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.799411 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.800127 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.801026 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.802344 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.803453 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.804683 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.805294 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.806625 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.807231 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.807899 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.809057 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.809788 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.810966 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811478 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811492 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811575 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811604 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811645 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:29.811604664 +0000 UTC m=+80.764927984 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.811690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811721 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811741 4808 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811752 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811808 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:29.811778679 +0000 UTC m=+80.765102069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811821 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811833 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811845 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811877 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:29.811870412 +0000 UTC m=+80.765193732 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811921 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811940 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:29.811934994 +0000 UTC m=+80.765258314 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811964 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: E0311 08:40:27.811987 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:29.811978215 +0000 UTC m=+80.765301535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.812271 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.813659 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.814342 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.815569 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.816757 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.818722 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.819307 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.820110 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.822044 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.822717 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.824160 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.825813 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.828818 4808 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.829012 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.831463 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.832614 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.833134 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.835079 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.835902 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.837033 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.837988 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.839313 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.840126 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.841490 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.842260 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.843526 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.844111 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.845312 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.845887 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.846978 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.847432 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.848185 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.848664 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.849540 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.850064 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.850544 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.859573 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.859604 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.859613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.859626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.859635 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.961827 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.961889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.961905 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.961927 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:27 crc kubenswrapper[4808]: I0311 08:40:27.961943 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:27Z","lastTransitionTime":"2026-03-11T08:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.064824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.064932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.064956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.064983 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.065005 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.167690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.168477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.168595 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.168680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.168761 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.270980 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.271018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.271030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.271047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.271060 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.373418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.373455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.373464 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.373476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.373485 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.476376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.476411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.476423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.476438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.476448 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.579064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.579130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.579149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.579171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.579216 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.682251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.682341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.682376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.682398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.682412 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.785471 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.785537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.785551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.785573 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.785588 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.888486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.888552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.888570 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.888595 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.888614 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.991956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.992008 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.992024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.992047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:28 crc kubenswrapper[4808]: I0311 08:40:28.992063 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:28Z","lastTransitionTime":"2026-03-11T08:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.094734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.094780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.094793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.094812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.094825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.197635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.197700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.197722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.197751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.197777 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.212645 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.227991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.242420 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.259577 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.272171 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.290311 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.300423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.300461 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.300474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.300490 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.300503 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.302649 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.402996 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.403054 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.403073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.403098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.403115 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.505448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.505503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.505517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.505540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.505554 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.608613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.608656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.608665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.608679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.608691 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.711604 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.711670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.711689 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.711716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.711734 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.789125 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.789280 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.789464 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.789571 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.789755 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.789866 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.804097 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.814640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.814670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.814683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.814701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.814715 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.817175 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.830480 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.830561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.830586 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.830609 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830646 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:33.830609149 +0000 UTC m=+84.783932489 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830673 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.830710 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830722 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:33.830709692 +0000 UTC m=+84.784033012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830770 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830802 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830815 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830846 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830894 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830915 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830864 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:33.830846946 +0000 UTC m=+84.784170326 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.831025 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:33.830999401 +0000 UTC m=+84.784322711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.830865 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.831067 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:33.831060053 +0000 UTC m=+84.784383373 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.831212 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.845718 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.858472 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.873221 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.913751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.913803 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.913814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.913837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.913848 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.927731 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.931592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.931713 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.931831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.931921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.932019 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.944553 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.947823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.947871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.947883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.947900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.947911 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.962608 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.966491 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.966619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.966703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.966792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.966882 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:29 crc kubenswrapper[4808]: E0311 08:40:29.982165 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.986025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.986051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.986062 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.986077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:29 crc kubenswrapper[4808]: I0311 08:40:29.986087 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:29Z","lastTransitionTime":"2026-03-11T08:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: E0311 08:40:30.001384 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:30 crc kubenswrapper[4808]: E0311 08:40:30.001562 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.003799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.003837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.003848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.003864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.003874 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.106550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.106596 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.106629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.106651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.106663 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.209412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.209474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.209492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.209520 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.209538 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.312909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.312975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.312993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.313020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.313038 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.416197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.416287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.416306 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.416326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.416338 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.518926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.518983 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.518997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.519022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.519034 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.623247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.623316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.623331 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.623387 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.623422 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.726738 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.726789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.726798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.726826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.726835 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.829376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.829422 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.829435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.829451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.829463 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.932293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.932386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.932414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.932443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:30 crc kubenswrapper[4808]: I0311 08:40:30.932464 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:30Z","lastTransitionTime":"2026-03-11T08:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.035817 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.035867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.035878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.035896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.035908 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.138624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.138675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.138688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.138705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.138716 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.242229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.242453 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.242493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.242526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.242546 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.345790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.345845 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.345861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.345884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.345900 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.448902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.448979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.449005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.449037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.449061 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.551924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.551985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.552003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.552026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.552044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.654750 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.654803 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.654815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.654834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.654846 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.758424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.758505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.758530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.758556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.758579 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.788353 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.788476 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.788437 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:31 crc kubenswrapper[4808]: E0311 08:40:31.788730 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:31 crc kubenswrapper[4808]: E0311 08:40:31.788761 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:31 crc kubenswrapper[4808]: E0311 08:40:31.789133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.800491 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.861648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.861730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.861755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.861787 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.861812 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.964276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.964319 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.964329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.964345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:31 crc kubenswrapper[4808]: I0311 08:40:31.964374 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:31Z","lastTransitionTime":"2026-03-11T08:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.067145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.067257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.067273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.067297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.067314 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.170758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.170819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.170846 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.170878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.170899 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.273056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.273108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.273124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.273146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.273162 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.376166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.376246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.376273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.376391 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.376415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.478705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.478764 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.478786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.478813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.478838 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.582101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.582167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.582189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.582218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.582237 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.684628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.684677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.684697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.684721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.684743 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.787309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.787351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.787381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.787397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.787408 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.889575 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.889622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.889634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.889651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.889662 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.992067 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.992113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.992125 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.992146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:32 crc kubenswrapper[4808]: I0311 08:40:32.992159 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:32Z","lastTransitionTime":"2026-03-11T08:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.094916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.094963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.094982 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.095003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.095016 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.197740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.197799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.197820 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.197843 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.197861 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.300964 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.301048 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.301073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.301103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.301124 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.404379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.404431 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.404443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.404459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.404472 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.507382 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.507442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.507460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.507482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.507496 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.611284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.611329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.611342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.611383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.611396 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.713663 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.713738 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.713761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.713790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.713814 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.788961 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.788964 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.789127 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.789166 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.788990 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.789234 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.816205 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.816271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.816293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.816324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.816348 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.869803 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.869901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.869937 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870000 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:41.869978179 +0000 UTC m=+92.823301509 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870044 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.870043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870078 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.870587 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870765 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870790 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870810 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870829 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870851 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.870866 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.873835 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:41.873785 +0000 UTC m=+92.827108370 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.873901 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:41.873884233 +0000 UTC m=+92.827207583 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.873935 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:41.873912614 +0000 UTC m=+92.827235954 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:33 crc kubenswrapper[4808]: E0311 08:40:33.873960 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:41.873947905 +0000 UTC m=+92.827271245 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.918904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.918942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.918952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.918968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:33 crc kubenswrapper[4808]: I0311 08:40:33.918979 4808 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:33Z","lastTransitionTime":"2026-03-11T08:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.021991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.022026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.022035 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.022050 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.022060 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.124156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.124235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.124250 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.124267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.124283 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.227790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.227851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.227868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.227894 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.227911 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.330633 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.330693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.330709 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.330729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.330743 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.433422 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.433496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.433510 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.433536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.433553 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.536287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.536423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.536453 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.536505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.536541 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.639162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.639222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.639239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.639261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.639279 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.742684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.742751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.742787 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.742816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.742836 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.845955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.846026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.846046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.846069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.846086 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.948348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.948426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.948441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.948457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:34 crc kubenswrapper[4808]: I0311 08:40:34.948468 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:34Z","lastTransitionTime":"2026-03-11T08:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.051405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.051475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.051493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.051512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.051526 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.154991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.155257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.155289 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.155318 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.155342 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.258210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.258278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.258298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.258324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.258344 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.362345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.364061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.364104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.364129 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.364145 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.467281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.467338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.467379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.467403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.467423 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.570486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.570556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.570581 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.570612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.570636 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.673256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.673312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.673329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.673352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.673400 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.776930 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.776992 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.777011 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.777039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.777059 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.788526 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.788610 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:35 crc kubenswrapper[4808]: E0311 08:40:35.788699 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.788831 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:35 crc kubenswrapper[4808]: E0311 08:40:35.788970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:35 crc kubenswrapper[4808]: E0311 08:40:35.789178 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.879705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.879766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.879784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.879808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.879832 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.982961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.983024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.983041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.983066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:35 crc kubenswrapper[4808]: I0311 08:40:35.983086 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:35Z","lastTransitionTime":"2026-03-11T08:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.086544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.086599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.086629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.086659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.086683 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.188973 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.189018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.189031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.189050 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.189061 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.292302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.292347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.292384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.292401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.292413 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.394782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.394844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.394862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.394891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.394910 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.497307 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.497420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.497445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.497470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.497490 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.599807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.600334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.600385 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.600414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.600435 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.703288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.703348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.703379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.703399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.703411 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.805503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.805545 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.805557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.805574 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.805585 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.806591 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.907855 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.907918 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.907934 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.907957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:36 crc kubenswrapper[4808]: I0311 08:40:36.907973 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:36Z","lastTransitionTime":"2026-03-11T08:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.010539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.010587 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.010597 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.010614 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.010627 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.113180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.113224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.113237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.113257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.113270 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.216210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.216280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.216298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.216324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.216343 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.319212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.319265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.319284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.319310 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.319328 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.422926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.423009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.423037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.423070 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.423135 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.526158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.526197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.526206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.526220 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.526231 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.628557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.628865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.628947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.629032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.629121 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.731861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.731920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.731972 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.732003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.732028 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.789147 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.789216 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:37 crc kubenswrapper[4808]: E0311 08:40:37.789339 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:37 crc kubenswrapper[4808]: E0311 08:40:37.789470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.789984 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:37 crc kubenswrapper[4808]: E0311 08:40:37.790396 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.834793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.835103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.835198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.835342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.835546 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.938842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.938888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.938900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.938916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:37 crc kubenswrapper[4808]: I0311 08:40:37.938926 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:37Z","lastTransitionTime":"2026-03-11T08:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.040981 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.041017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.041025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.041038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.041048 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.143691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.143758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.143781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.143815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.143840 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.246287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.246440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.246467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.246501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.246522 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.349247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.349354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.349422 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.349454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.349476 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.452681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.452722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.452735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.452753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.452765 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.555650 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.555711 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.555729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.555755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.555772 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.658774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.658842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.658858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.658885 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.658910 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.762175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.762243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.762260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.762286 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.762302 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.865962 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.866038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.866063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.866093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.866114 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.969664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.969706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.969723 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.969741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:38 crc kubenswrapper[4808]: I0311 08:40:38.969752 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:38Z","lastTransitionTime":"2026-03-11T08:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.072158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.072196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.072207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.072225 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.072238 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.174692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.174731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.174742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.174759 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.174772 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.277434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.277502 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.277540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.277572 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.277595 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.380230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.380277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.380285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.380301 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.380312 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.483297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.483396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.483423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.483454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.483475 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.587653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.587719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.587734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.587753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.587765 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.690740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.690826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.690850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.690888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.690930 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.788951 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.788986 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:39 crc kubenswrapper[4808]: E0311 08:40:39.789123 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.789249 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:39 crc kubenswrapper[4808]: E0311 08:40:39.789276 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:39 crc kubenswrapper[4808]: E0311 08:40:39.789320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.793878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.793926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.793939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.793957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.793971 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.804693 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.819384 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.832284 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.855238 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.866021 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.879198 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.892612 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.896463 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.896525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.896542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.896568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.896588 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:39Z","lastTransitionTime":"2026-03-11T08:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:39 crc kubenswrapper[4808]: I0311 08:40:39.910197 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.000297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.000383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.000403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.000428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.000448 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.103500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.103541 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.103550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.103564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.103572 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.194156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.194234 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.194254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.194280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.194298 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.208820 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.213609 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.213672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.213691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.213717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.213735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.233200 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.238030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.238101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.238119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.238146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.238165 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.250232 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.255444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.255497 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.255515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.255540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.255563 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.272794 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.278271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.278340 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.278394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.278429 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.278452 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.292214 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.292486 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.294821 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.294897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.294918 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.294943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.294962 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.398026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.398097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.398121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.398156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.398175 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.500617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.500656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.500666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.500680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.500688 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.603475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.603522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.603536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.603554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.603565 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.705709 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.705761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.705779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.705802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.705818 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.802806 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.802923 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:40:40 crc kubenswrapper[4808]: E0311 08:40:40.803078 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.808303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.808381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.808392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.808409 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.808420 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.910504 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.910548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.910559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.910575 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:40 crc kubenswrapper[4808]: I0311 08:40:40.910584 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:40Z","lastTransitionTime":"2026-03-11T08:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.012350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.012398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.012406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.012421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.012430 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.115204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.115260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.115272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.115291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.115304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.217550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.217619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.217641 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.217670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.217693 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.247185 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.247325 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.319756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.319810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.319827 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.319850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.319867 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.422291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.422372 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.422384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.422455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.422471 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.525842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.525970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.526006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.526039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.526076 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.632047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.632471 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.632501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.632529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.632553 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.735771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.735832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.735849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.735870 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.735884 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.788679 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.789203 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.789572 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.789649 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.789729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.789812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.838859 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.839144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.839218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.839338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.839450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941748 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941824 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.941886 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:40:57.941859601 +0000 UTC m=+108.895182961 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941932 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.941963 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942017 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:57.942000345 +0000 UTC m=+108.895323665 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942127 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942150 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942168 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942223 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:57.942204691 +0000 UTC m=+108.895528041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942275 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.941756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.942321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942313 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:57.942301834 +0000 UTC m=+108.895625184 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.942382 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.942403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:41 crc kubenswrapper[4808]: I0311 08:40:41.942415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:41Z","lastTransitionTime":"2026-03-11T08:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942492 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942544 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942560 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:41 crc kubenswrapper[4808]: E0311 08:40:41.942604 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:40:57.942590532 +0000 UTC m=+108.895913892 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.044159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.044191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.044199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.044214 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.044223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.147026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.147119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.147144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.147178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.147201 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.250714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.250805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.250822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.250878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.250898 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.353205 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.353244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.353254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.353267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.353276 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.455624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.455670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.455679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.455696 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.455705 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.558603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.558652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.558660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.558677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.558688 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.661485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.661611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.661625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.661665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.661679 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.764622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.764685 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.764702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.764726 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.764743 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.867073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.867116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.867124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.867137 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.867146 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.969789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.969849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.969861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.969875 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:42 crc kubenswrapper[4808]: I0311 08:40:42.969885 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:42Z","lastTransitionTime":"2026-03-11T08:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.072858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.072899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.072910 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.072925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.072936 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.175634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.175711 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.175735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.175766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.175788 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.278126 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.278175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.278187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.278207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.278219 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.380664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.380701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.380709 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.380725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.380737 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.483807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.483872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.483888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.483912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.483930 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.587338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.587429 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.587447 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.587470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.587487 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.662581 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.690648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.690715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.690737 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.690762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.690853 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.788867 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:43 crc kubenswrapper[4808]: E0311 08:40:43.791519 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.791677 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.791727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:43 crc kubenswrapper[4808]: E0311 08:40:43.791831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:43 crc kubenswrapper[4808]: E0311 08:40:43.791885 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.793298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.793411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.793439 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.793474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.793498 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.897636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.897686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.897704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.897728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:43 crc kubenswrapper[4808]: I0311 08:40:43.897745 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:43Z","lastTransitionTime":"2026-03-11T08:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.000486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.000577 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.000592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.000611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.000625 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.104676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.104731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.104749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.104773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.104789 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.208034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.208097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.208120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.208150 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.208172 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.311034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.311092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.311104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.311122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.311140 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.415624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.415693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.415716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.415748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.415772 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.518633 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.518978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.519089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.519214 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.519329 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.626030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.635243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.635535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.635847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.636199 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.738735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.738831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.738858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.738889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.738912 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.842187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.842246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.842264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.842290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.842307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.946023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.946065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.946077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.946096 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:44 crc kubenswrapper[4808]: I0311 08:40:44.946108 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:44Z","lastTransitionTime":"2026-03-11T08:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.048620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.048710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.048728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.048751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.048768 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.151446 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.151475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.151483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.151502 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.151512 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.254056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.254117 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.254136 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.254161 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.254177 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.321765 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-twvrg"] Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.322394 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.325249 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.325650 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.326437 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.342569 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356768 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356621 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.356915 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.370501 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.374243 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6377c39c-8ecf-409c-b3e7-ea9d717e234f-hosts-file\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.374348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99gw\" (UniqueName: \"kubernetes.io/projected/6377c39c-8ecf-409c-b3e7-ea9d717e234f-kube-api-access-c99gw\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.389007 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.405301 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.418513 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.430657 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.444207 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.457538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.459415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.459453 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.459461 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.459476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.459486 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.469326 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.475777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6377c39c-8ecf-409c-b3e7-ea9d717e234f-hosts-file\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.475831 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99gw\" (UniqueName: 
\"kubernetes.io/projected/6377c39c-8ecf-409c-b3e7-ea9d717e234f-kube-api-access-c99gw\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.475944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6377c39c-8ecf-409c-b3e7-ea9d717e234f-hosts-file\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.494334 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99gw\" (UniqueName: \"kubernetes.io/projected/6377c39c-8ecf-409c-b3e7-ea9d717e234f-kube-api-access-c99gw\") pod \"node-resolver-twvrg\" (UID: \"6377c39c-8ecf-409c-b3e7-ea9d717e234f\") " pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.562178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.562256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.562277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.562305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.562327 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.648415 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-twvrg" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.664797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.664833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.664844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.664861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.664873 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.697788 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-tfsm9"] Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.698128 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2r84h"] Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.698280 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.698940 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.700123 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.700295 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.700694 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.700837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.701014 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.701337 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.701606 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.701732 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.702215 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dgh9v"] Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.702553 4808 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.703997 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.705254 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.706208 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.709675 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.714223 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.726410 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.737452 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.766886 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.766914 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.766922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc 
kubenswrapper[4808]: I0311 08:40:45.766935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.766944 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.768879 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778346 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-hostroot\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778392 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-cnibin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778408 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-etc-kubernetes\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778425 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-os-release\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778441 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-socket-dir-parent\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778454 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-kubelet\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778482 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2hw2\" (UniqueName: \"kubernetes.io/projected/3dda5309-668d-4e3c-b3b2-1d708eecc578-kube-api-access-c2hw2\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778510 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778526 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dda5309-668d-4e3c-b3b2-1d708eecc578-proxy-tls\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc 
kubenswrapper[4808]: I0311 08:40:45.778538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-conf-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-daemon-config\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778579 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgnl\" (UniqueName: \"kubernetes.io/projected/fcd5dff4-0826-4876-9fd3-3f19781a17bf-kube-api-access-zhgnl\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778595 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-bin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 
crc kubenswrapper[4808]: I0311 08:40:45.778611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-multus\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778626 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-multus-certs\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778639 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cnibin\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778668 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-cni-binary-copy\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-k8s-cni-cncf-io\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 
crc kubenswrapper[4808]: I0311 08:40:45.778696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dda5309-668d-4e3c-b3b2-1d708eecc578-mcd-auth-proxy-config\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778710 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-system-cni-dir\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778730 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-system-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778744 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-netns\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgpx\" (UniqueName: \"kubernetes.io/projected/c1a75dfb-31dd-4275-a309-c9e7130feb05-kube-api-access-chgpx\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " 
pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778775 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3dda5309-668d-4e3c-b3b2-1d708eecc578-rootfs\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.778795 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-os-release\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.781791 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.788729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.788836 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:45 crc kubenswrapper[4808]: E0311 08:40:45.788933 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.789184 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:45 crc kubenswrapper[4808]: E0311 08:40:45.789243 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:45 crc kubenswrapper[4808]: E0311 08:40:45.789289 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.798530 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.811168 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.820535 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.831431 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.845885 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.861168 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.869391 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.869429 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.869439 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.869454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.869465 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.874645 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-bin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-multus\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-multus-certs\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879218 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cnibin\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879238 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-system-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-bin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879339 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cnibin\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879252 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-cni-binary-copy\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879378 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-cni-multus\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879492 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-system-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879502 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-k8s-cni-cncf-io\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-multus-certs\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dda5309-668d-4e3c-b3b2-1d708eecc578-mcd-auth-proxy-config\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-k8s-cni-cncf-io\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879651 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-system-cni-dir\") 
pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-os-release\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879696 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-netns\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879710 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgpx\" (UniqueName: \"kubernetes.io/projected/c1a75dfb-31dd-4275-a309-c9e7130feb05-kube-api-access-chgpx\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-system-cni-dir\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879726 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3dda5309-668d-4e3c-b3b2-1d708eecc578-rootfs\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879772 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-cnibin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3dda5309-668d-4e3c-b3b2-1d708eecc578-rootfs\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-hostroot\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879809 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-etc-kubernetes\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-os-release\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879838 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-etc-kubernetes\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879838 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-hostroot\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-cnibin\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-socket-dir-parent\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-run-netns\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879917 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-socket-dir-parent\") pod 
\"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879876 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-os-release\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879948 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-kubelet\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879973 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879998 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2hw2\" (UniqueName: \"kubernetes.io/projected/3dda5309-668d-4e3c-b3b2-1d708eecc578-kube-api-access-c2hw2\") pod \"machine-config-daemon-tfsm9\" (UID: 
\"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880023 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-host-var-lib-kubelet\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-conf-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.879885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-os-release\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-conf-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " 
pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880131 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-daemon-config\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880180 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dda5309-668d-4e3c-b3b2-1d708eecc578-proxy-tls\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgnl\" (UniqueName: \"kubernetes.io/projected/fcd5dff4-0826-4876-9fd3-3f19781a17bf-kube-api-access-zhgnl\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880442 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-cni-binary-copy\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: 
I0311 08:40:45.880661 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-cni-dir\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880710 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880846 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcd5dff4-0826-4876-9fd3-3f19781a17bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.880974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1a75dfb-31dd-4275-a309-c9e7130feb05-multus-daemon-config\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.881029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcd5dff4-0826-4876-9fd3-3f19781a17bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.881586 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dda5309-668d-4e3c-b3b2-1d708eecc578-mcd-auth-proxy-config\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.887972 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dda5309-668d-4e3c-b3b2-1d708eecc578-proxy-tls\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.892506 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5
646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408
f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298
e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.896961 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgpx\" (UniqueName: \"kubernetes.io/projected/c1a75dfb-31dd-4275-a309-c9e7130feb05-kube-api-access-chgpx\") pod \"multus-dgh9v\" (UID: \"c1a75dfb-31dd-4275-a309-c9e7130feb05\") " pod="openshift-multus/multus-dgh9v" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.899015 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2hw2\" (UniqueName: \"kubernetes.io/projected/3dda5309-668d-4e3c-b3b2-1d708eecc578-kube-api-access-c2hw2\") pod \"machine-config-daemon-tfsm9\" (UID: \"3dda5309-668d-4e3c-b3b2-1d708eecc578\") " pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.900943 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgnl\" (UniqueName: \"kubernetes.io/projected/fcd5dff4-0826-4876-9fd3-3f19781a17bf-kube-api-access-zhgnl\") pod \"multus-additional-cni-plugins-2r84h\" (UID: \"fcd5dff4-0826-4876-9fd3-3f19781a17bf\") " pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.904841 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8
909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.918233 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.931780 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.942138 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.955380 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.967325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 
08:40:45.974233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.974278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.974291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.974305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.974316 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:45Z","lastTransitionTime":"2026-03-11T08:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.980055 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:45 crc kubenswrapper[4808]: I0311 08:40:45.989915 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:45Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.001613 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.012234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.024264 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.026457 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.032649 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2r84h" Mar 11 08:40:46 crc kubenswrapper[4808]: W0311 08:40:46.038559 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dda5309_668d_4e3c_b3b2_1d708eecc578.slice/crio-938b848eb4400874b69961975c229b166d5a424bb7c561c5fbd832b1085538cf WatchSource:0}: Error finding container 938b848eb4400874b69961975c229b166d5a424bb7c561c5fbd832b1085538cf: Status 404 returned error can't find the container with id 938b848eb4400874b69961975c229b166d5a424bb7c561c5fbd832b1085538cf Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.038696 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dgh9v" Mar 11 08:40:46 crc kubenswrapper[4808]: W0311 08:40:46.052568 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a75dfb_31dd_4275_a309_c9e7130feb05.slice/crio-e84a4732e102217454219178a0702dcdc8d291b51bb7f6a5b9719046737b82d0 WatchSource:0}: Error finding container e84a4732e102217454219178a0702dcdc8d291b51bb7f6a5b9719046737b82d0: Status 404 returned error can't find the container with id e84a4732e102217454219178a0702dcdc8d291b51bb7f6a5b9719046737b82d0 Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.072824 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8wfl5"] Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.073678 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.078659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.078811 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.078993 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.078868 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.079088 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.079141 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.079244 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.085284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.085315 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.085323 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.085337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.085348 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.089164 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.098305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.114053 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.133871 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.148598 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.164429 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.176823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182241 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182373 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182444 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k5ct\" (UniqueName: \"kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182501 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc 
kubenswrapper[4808]: I0311 08:40:46.182550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182571 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182614 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182661 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182776 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182818 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182839 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides\") pod \"ovnkube-node-8wfl5\" (UID: 
\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.182917 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.194616 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.197260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.197506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.197518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.197538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.197552 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.208142 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.218908 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.229839 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.268955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-twvrg" event={"ID":"6377c39c-8ecf-409c-b3e7-ea9d717e234f","Type":"ContainerStarted","Data":"a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.269008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-twvrg" event={"ID":"6377c39c-8ecf-409c-b3e7-ea9d717e234f","Type":"ContainerStarted","Data":"59b4a0351224eeb22f47f148d06fed0efc5162de9c27aeae9e8d37c7cb247aba"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.270946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerStarted","Data":"81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.270993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerStarted","Data":"e84a4732e102217454219178a0702dcdc8d291b51bb7f6a5b9719046737b82d0"} Mar 11 08:40:46 crc 
kubenswrapper[4808]: I0311 08:40:46.272488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.272517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"938b848eb4400874b69961975c229b166d5a424bb7c561c5fbd832b1085538cf"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.274151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerStarted","Data":"54b72620db5e782ccc483c0e54a5e3e0b2485ad13ff0270b90ce8546e43fae64"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283309 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283324 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283385 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283493 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns\") pod 
\"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283531 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283567 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283578 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283587 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283609 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283652 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283667 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 
11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283680 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283751 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283758 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 
08:40:46.283770 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k5ct\" (UniqueName: \"kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283797 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283797 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.283876 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284054 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284059 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284246 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184
b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.284822 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.288042 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 
08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.297942 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.299350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.299392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.299401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 
08:40:46.299415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.299425 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.312299 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.312713 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k5ct\" (UniqueName: \"kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct\") pod \"ovnkube-node-8wfl5\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.330057 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.340788 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.351600 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.363178 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.374116 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.385105 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.399915 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.401339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.401387 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.401396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.401412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.401421 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.409537 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.414732 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:46 crc kubenswrapper[4808]: W0311 08:40:46.426769 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafeac5d0_d84f_4776_ae37_a03c8f0f66b8.slice/crio-b5fb214c37e3f6908ae8b5c1d7c0a6c7f1142819930144cdd979bfeb40b3e7f2 WatchSource:0}: Error finding container b5fb214c37e3f6908ae8b5c1d7c0a6c7f1142819930144cdd979bfeb40b3e7f2: Status 404 returned error can't find the container with id b5fb214c37e3f6908ae8b5c1d7c0a6c7f1142819930144cdd979bfeb40b3e7f2 Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.430901 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.449230 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.462860 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.475420 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.490753 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.510862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.510901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.510911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.510930 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.510944 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.514501 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:46Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.613474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.613523 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.613535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.613553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.613566 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.716440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.716479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.716525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.716541 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.716551 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.819284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.819333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.819350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.819398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.819416 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.922258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.922301 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.922309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.922322 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:46 crc kubenswrapper[4808]: I0311 08:40:46.922332 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:46Z","lastTransitionTime":"2026-03-11T08:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.024984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.025025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.025039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.025056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.025069 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.127984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.128232 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.128244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.128261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.128273 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.231014 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.231078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.231095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.231120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.231137 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.279455 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.280782 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2" exitCode=0 Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.280853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.280886 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"b5fb214c37e3f6908ae8b5c1d7c0a6c7f1142819930144cdd979bfeb40b3e7f2"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.282250 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274" exitCode=0 Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.282275 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.296250 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.316287 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.333371 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.334078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.334112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.334122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.334140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.334151 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.346952 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.357055 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.373031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.393654 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.417776 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.430843 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.436329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.436372 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.436383 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.436398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.436410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.441286 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.454522 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.466493 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.481578 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.496703 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.508079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.532823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.542174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.542214 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.542225 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.542242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.542254 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.548385 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.572440 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.587997 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.600209 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.615925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.630404 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.645397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.645426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.645434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.645448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.645457 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.654906 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.672048 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.680436 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.690921 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.701230 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.713534 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:47Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.748345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc 
kubenswrapper[4808]: I0311 08:40:47.748392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.748404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.748419 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.748430 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.788807 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.788854 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:47 crc kubenswrapper[4808]: E0311 08:40:47.788971 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.788998 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:47 crc kubenswrapper[4808]: E0311 08:40:47.789082 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:47 crc kubenswrapper[4808]: E0311 08:40:47.789136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.851138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.851178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.851191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.851208 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.851219 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.953622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.953670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.953682 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.953700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:47 crc kubenswrapper[4808]: I0311 08:40:47.953713 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:47Z","lastTransitionTime":"2026-03-11T08:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.055634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.055688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.055701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.055718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.055730 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.158046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.158091 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.158104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.158122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.158133 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.260328 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.260374 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.260382 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.260396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.260410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288581 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.288598 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} Mar 11 08:40:48 crc kubenswrapper[4808]: 
I0311 08:40:48.290620 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a" exitCode=0 Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.290687 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.314480 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.323571 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.336908 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.350077 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.362934 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.362976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.362987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc 
kubenswrapper[4808]: I0311 08:40:48.363003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.363016 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.369116 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.381742 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.409855 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.424348 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.439151 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.454989 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.465818 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.465865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.465878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.465897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.465914 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.469388 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.484676 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.496002 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5d
a5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.517444 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:48Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.567976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.568006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.568017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.568031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.568040 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.673952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.673995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.674005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.674022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.674035 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.777552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.777618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.777639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.777670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.777693 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.880953 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.881006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.881022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.881044 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.881061 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.985550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.985615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.985639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.985668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:48 crc kubenswrapper[4808]: I0311 08:40:48.985693 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:48Z","lastTransitionTime":"2026-03-11T08:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.088389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.088427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.088442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.088461 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.088474 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.190346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.190408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.190420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.190438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.190454 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.293144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.293176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.293186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.293198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.293206 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.295580 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb" exitCode=0 Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.295615 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.306284 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.327426 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.338538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.348889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.360459 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.370673 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.386884 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 
08:40:49.395988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.396038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.396050 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.396069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.396081 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.401007 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.428923 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.440896 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.453067 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.466734 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.476860 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.491305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.498037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc 
kubenswrapper[4808]: I0311 08:40:49.498073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.498082 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.498096 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.498105 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.600227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.600258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.600266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.600278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.600287 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.704016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.704048 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.704057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.704070 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.704079 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.788919 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.788997 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.789027 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:49 crc kubenswrapper[4808]: E0311 08:40:49.789137 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:49 crc kubenswrapper[4808]: E0311 08:40:49.789208 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:49 crc kubenswrapper[4808]: E0311 08:40:49.789282 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.804703 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.811665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.811710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.811720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.811736 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.811746 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.831315 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.841807 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.851500 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.864071 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.882058 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.892905 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.910780 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.914187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.914211 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.914218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.914238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.914246 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:49Z","lastTransitionTime":"2026-03-11T08:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.924409 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.942828 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd
5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.967207 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0293
52eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:49 crc kubenswrapper[4808]: I0311 08:40:49.982503 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.001078 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.014900 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.016010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.016067 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.016083 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.016103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.016116 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.117925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.117977 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.117990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.118010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.118025 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.221599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.221652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.221669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.221692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.221707 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.302571 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9" exitCode=0 Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.302644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.310611 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.325976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.326045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.326068 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.326114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.326138 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.334332 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.362119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.383314 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.401167 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.420525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.429524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc 
kubenswrapper[4808]: I0311 08:40:50.429562 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.429575 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.429593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.429608 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.438111 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184
b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.448625 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.461780 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.481069 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.492905 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.509456 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.532113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.532163 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.532179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.532204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.532221 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.542788 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.555634 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.570490 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.613589 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.613674 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.613695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.614033 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.614223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.626155 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.630402 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.630430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.630441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.630456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.630466 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.645726 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.649963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.650013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.650026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.650044 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.650056 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.661699 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.665822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.665863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.665877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.665896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.665908 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.681529 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.685481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.685533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.685546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.685568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.685580 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.704665 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:50 crc kubenswrapper[4808]: E0311 08:40:50.705150 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.707937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.707999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.708021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.708051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.708076 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.810693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.810955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.810963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.810978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.810987 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.914011 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.914057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.914066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.914083 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:50 crc kubenswrapper[4808]: I0311 08:40:50.914094 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:50Z","lastTransitionTime":"2026-03-11T08:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.017919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.017981 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.018005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.018032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.018049 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.120740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.120779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.120789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.120802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.120811 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.223010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.223038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.223047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.223060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.223068 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.320715 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5" exitCode=0 Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.320754 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.329442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.329467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.329476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.329488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.329496 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.344683 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.366502 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.383446 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.405561 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.415850 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.429534 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.432145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.432192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.432202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.432216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.432225 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.441269 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.458239 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.473335 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.486278 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.498579 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.509195 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.520633 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.531682 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-11T08:40:51Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.534457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.534480 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.534488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.534501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.534511 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.637253 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.637300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.637310 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.637326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.637339 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.739881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.739929 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.739942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.739956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.739967 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.788575 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:51 crc kubenswrapper[4808]: E0311 08:40:51.788696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.789270 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.789494 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:51 crc kubenswrapper[4808]: E0311 08:40:51.789566 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:51 crc kubenswrapper[4808]: E0311 08:40:51.789484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.842952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.842995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.843014 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.843036 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.843052 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.946117 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.946239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.946258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.946281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:51 crc kubenswrapper[4808]: I0311 08:40:51.946297 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:51Z","lastTransitionTime":"2026-03-11T08:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.015279 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dptv8"] Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.016166 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.019537 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.019674 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.019853 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.020137 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.039509 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.049808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.049865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.049882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.049924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.049943 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.061989 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.078991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.108927 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.145021 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.152719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.152758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc 
kubenswrapper[4808]: I0311 08:40:52.152775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.152797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.152813 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.160679 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af91f09b-b749-4f04-81ac-2f0079a0dca5-host\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.160749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdnd\" (UniqueName: \"kubernetes.io/projected/af91f09b-b749-4f04-81ac-2f0079a0dca5-kube-api-access-dtdnd\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.160878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af91f09b-b749-4f04-81ac-2f0079a0dca5-serviceca\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 
crc kubenswrapper[4808]: I0311 08:40:52.198661 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.221174 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.236725 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.255907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.255962 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.255980 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.256004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.256021 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.261795 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdnd\" (UniqueName: \"kubernetes.io/projected/af91f09b-b749-4f04-81ac-2f0079a0dca5-kube-api-access-dtdnd\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.261868 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af91f09b-b749-4f04-81ac-2f0079a0dca5-host\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.261972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af91f09b-b749-4f04-81ac-2f0079a0dca5-serviceca\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.262508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af91f09b-b749-4f04-81ac-2f0079a0dca5-host\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.264155 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/af91f09b-b749-4f04-81ac-2f0079a0dca5-serviceca\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.274596 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.291101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdnd\" (UniqueName: \"kubernetes.io/projected/af91f09b-b749-4f04-81ac-2f0079a0dca5-kube-api-access-dtdnd\") pod \"node-ca-dptv8\" (UID: \"af91f09b-b749-4f04-81ac-2f0079a0dca5\") " pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.291563 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.315695 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.329147 4808 generic.go:334] "Generic (PLEG): container finished" podID="fcd5dff4-0826-4876-9fd3-3f19781a17bf" containerID="a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21" exitCode=0 Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.329210 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerDied","Data":"a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.337626 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.340783 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dptv8" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.359675 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.359947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.360015 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.360038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.360069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.360088 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: W0311 08:40:52.366549 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf91f09b_b749_4f04_81ac_2f0079a0dca5.slice/crio-e1f84784c1adee3c4b4005d3a6e2d61a968575b0e6707d0ef95421b8ed0c0401 WatchSource:0}: Error finding container e1f84784c1adee3c4b4005d3a6e2d61a968575b0e6707d0ef95421b8ed0c0401: Status 404 returned error can't find the container with id e1f84784c1adee3c4b4005d3a6e2d61a968575b0e6707d0ef95421b8ed0c0401 Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.380307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.390178 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.405100 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.424470 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.438219 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.452735 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.462864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.462907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.462919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.462938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc 
kubenswrapper[4808]: I0311 08:40:52.462949 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.463839 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.482898 4808 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.497889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.508897 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.524896 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.548575 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.562849 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.566831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.566865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.566877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.566898 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.566910 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.589876 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.603609 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.619819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.630252 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.674182 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.674231 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.674243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 
08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.674259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.674269 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.777226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.777279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.777291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.777311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.777325 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.789181 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:40:52 crc kubenswrapper[4808]: E0311 08:40:52.789418 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.879127 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.879156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.879165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.879179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.879188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.981143 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.981170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.981178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.981195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:52 crc kubenswrapper[4808]: I0311 08:40:52.981209 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:52Z","lastTransitionTime":"2026-03-11T08:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.084952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.085016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.085035 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.085058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.085075 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.187939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.188011 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.188032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.188061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.188086 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.291634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.291725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.291751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.291781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.291802 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.335796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dptv8" event={"ID":"af91f09b-b749-4f04-81ac-2f0079a0dca5","Type":"ContainerStarted","Data":"df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.335874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dptv8" event={"ID":"af91f09b-b749-4f04-81ac-2f0079a0dca5","Type":"ContainerStarted","Data":"e1f84784c1adee3c4b4005d3a6e2d61a968575b0e6707d0ef95421b8ed0c0401"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.341049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" event={"ID":"fcd5dff4-0826-4876-9fd3-3f19781a17bf","Type":"ContainerStarted","Data":"3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.346933 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.347439 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.347507 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.347535 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.359266 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.415856 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e2
98e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.418017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.418069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.418085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.418109 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.418129 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.421512 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.424117 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.438298 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.461703 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.483889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.500673 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.521791 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.521866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.521893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.521924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.521949 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.525087 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.545836 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.586262 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.603786 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.624912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.624982 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.625007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 
08:40:53.625041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.625064 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.629623 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.645502 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.665954 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.681414 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.694834 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.707631 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.721484 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.727959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.728019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.728040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.728065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.728083 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.742441 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.757120 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.772462 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.783688 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.789665 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:53 crc kubenswrapper[4808]: E0311 08:40:53.789876 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.789689 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:53 crc kubenswrapper[4808]: E0311 08:40:53.790055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.789668 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:53 crc kubenswrapper[4808]: E0311 08:40:53.790207 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.801098 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.828085 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.830145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.830221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.830249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.830280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.830307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.842355 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.855731 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.867350 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.882578 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.895772 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.908139 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.920919 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:53Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.933904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.933970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.933988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.934013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:53 crc kubenswrapper[4808]: I0311 08:40:53.934031 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:53Z","lastTransitionTime":"2026-03-11T08:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.036882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.036952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.036977 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.037007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.037028 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.140071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.140130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.140151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.140178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.140199 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.243801 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.243863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.243891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.243936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.243960 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.347003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.347078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.347103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.347138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.347163 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.449555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.449629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.449656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.449687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.449710 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.553695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.553760 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.553779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.553803 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.553821 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.656056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.656347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.656376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.656394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.656406 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.758861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.758954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.758975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.758997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.759013 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.865445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.865500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.865513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.865529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.865540 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.967500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.967543 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.967555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.967571 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:54 crc kubenswrapper[4808]: I0311 08:40:54.967584 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:54Z","lastTransitionTime":"2026-03-11T08:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.071423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.071486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.071509 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.071549 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.071572 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.174057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.174100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.174108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.174121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.174131 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.278099 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.278160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.278178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.278202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.278218 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.380161 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.380195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.380207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.380224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.380235 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.482252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.482287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.482296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.482311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.482320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.585736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.585847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.585873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.585916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.585942 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.689323 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.689452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.689478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.689510 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.689534 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.789275 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.789410 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:55 crc kubenswrapper[4808]: E0311 08:40:55.789489 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.789410 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:55 crc kubenswrapper[4808]: E0311 08:40:55.789648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:55 crc kubenswrapper[4808]: E0311 08:40:55.789842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.792325 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.792495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.792530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.792555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.792578 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.894971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.895034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.895057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.895078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.895092 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.998848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.998903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.998920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.998944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:55 crc kubenswrapper[4808]: I0311 08:40:55.998961 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:55Z","lastTransitionTime":"2026-03-11T08:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.102148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.102215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.102231 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.102254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.102274 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.206039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.206098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.206113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.206135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.206147 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.308583 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.308656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.308673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.308698 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.308716 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.359277 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/0.log" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.363559 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35" exitCode=1 Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.363622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.364977 4808 scope.go:117] "RemoveContainer" containerID="dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.386389 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.404704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.412704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.412761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.412780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.412806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.412825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.427097 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.446397 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.469092 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.500709 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.515717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.515765 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.515780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.515801 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.515816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.523983 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.540153 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.562257 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.595555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 
08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee8
7b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.611258 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.618871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.618914 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.618930 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.618954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.618973 4808 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.630003 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.645740 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.665258 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.687665 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.729808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.729848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.729860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.729878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc 
kubenswrapper[4808]: I0311 08:40:56.729890 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.832025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.832065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.832076 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.832092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.832103 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.989191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.989239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.989251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.989270 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:56 crc kubenswrapper[4808]: I0311 08:40:56.989282 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:56Z","lastTransitionTime":"2026-03-11T08:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.091333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.091405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.091419 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.091435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.091445 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.194538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.194607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.194620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.194645 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.194664 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.297864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.297912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.297924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.297942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.297954 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.370695 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/1.log" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.371666 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/0.log" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.376836 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e" exitCode=1 Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.376883 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.376928 4808 scope.go:117] "RemoveContainer" containerID="dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.378033 4808 scope.go:117] "RemoveContainer" containerID="f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e" Mar 11 08:40:57 crc kubenswrapper[4808]: E0311 08:40:57.378410 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.397177 4808 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.400220 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.400266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.400285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.400308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.400325 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.416515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.482919 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.492657 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.502248 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.502289 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.502299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.502315 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.502326 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.506851 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.526379 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.544569 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.553725 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.564514 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.574052 4808 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.586473 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.597292 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.604177 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.604220 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.604233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc 
kubenswrapper[4808]: I0311 08:40:57.604251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.604265 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.609813 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.623513 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.635272 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.706831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.706901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.706921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.706946 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.706964 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.788878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.788908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:57 crc kubenswrapper[4808]: E0311 08:40:57.789013 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.788878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:57 crc kubenswrapper[4808]: E0311 08:40:57.789155 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:57 crc kubenswrapper[4808]: E0311 08:40:57.789352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.809830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.809895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.809922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.809957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.809980 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.912521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.912567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.912580 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.912598 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:57 crc kubenswrapper[4808]: I0311 08:40:57.912610 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:57Z","lastTransitionTime":"2026-03-11T08:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.015328 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.015381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.015390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.015406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.015417 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.037916 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.037983 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.038021 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.038039 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.038059 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038153 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038169 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038180 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038195 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:41:30.038153051 +0000 UTC m=+140.991476411 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038196 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038252 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:41:30.038232623 +0000 UTC m=+140.991556093 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038322 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038419 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:41:30.038311365 +0000 UTC m=+140.991634715 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038466 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038498 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038499 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:41:30.03847381 +0000 UTC m=+140.991797170 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.038587 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-11 08:41:30.038560913 +0000 UTC m=+140.991884273 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.117679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.117731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.117747 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.117763 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.117774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.153666 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr"] Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.154270 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.157590 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.157894 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.180314 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.195858 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 
2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.214753 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.220601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.220645 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.220661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.220683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.220700 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.232869 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.239964 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.240265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.240484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.240660 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5q8\" (UniqueName: \"kubernetes.io/projected/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-kube-api-access-8d5q8\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.253925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.275819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.309404 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-
11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.323735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.323787 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.323804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.323829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.323846 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.331147 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.342261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.342331 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d5q8\" (UniqueName: \"kubernetes.io/projected/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-kube-api-access-8d5q8\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.342423 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.342462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.343328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.344978 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.352082 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.361559 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.370422 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d5q8\" (UniqueName: \"kubernetes.io/projected/da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6-kube-api-access-8d5q8\") pod \"ovnkube-control-plane-749d76644c-nqcqr\" (UID: \"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.381429 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.384500 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/1.log" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.397106 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.425155 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.426628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.426671 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.426690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.426714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.426730 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.443943 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.465177 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.481959 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.497640 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.510587 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.529431 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.529473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.529486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.529504 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.529516 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.631865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.631909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.631919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.631936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.631945 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.734824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.734860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.734868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.734882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.734895 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.838479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.838515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.838528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.838545 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.838556 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.892108 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kqsq9"] Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.892704 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:58 crc kubenswrapper[4808]: E0311 08:40:58.892804 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.906665 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.919592 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc 
kubenswrapper[4808]: I0311 08:40:58.941856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.941893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.941905 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.941923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.941933 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:58Z","lastTransitionTime":"2026-03-11T08:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.943327 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.948977 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/cf747e37-c201-4dcc-a2a5-2429f4eba47d-kube-api-access-7666v\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.949025 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.957502 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.974181 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:58 crc kubenswrapper[4808]: I0311 08:40:58.987577 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.003202 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.027787 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.041587 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.044049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.044090 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.044102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.044119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.044130 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.050374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/cf747e37-c201-4dcc-a2a5-2429f4eba47d-kube-api-access-7666v\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.050419 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.050525 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.050563 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:40:59.550550549 +0000 UTC m=+110.503873869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.061018 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.065894 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/cf747e37-c201-4dcc-a2a5-2429f4eba47d-kube-api-access-7666v\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.071772 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:5
2Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.082696 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.094587 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.105013 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.115821 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.130613 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.140915 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.146851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.146905 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.146929 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.146954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.146971 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.249488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.249534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.249546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.249562 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.249574 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.358403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.358487 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.358516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.358548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.358571 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.394848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" event={"ID":"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6","Type":"ContainerStarted","Data":"16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.394903 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" event={"ID":"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6","Type":"ContainerStarted","Data":"ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.394930 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" event={"ID":"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6","Type":"ContainerStarted","Data":"28641b4b721248835e912ae81f65212b6136162b4a17f380a839007b5308c998"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.413364 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.434253 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.452663 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.462192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.462240 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.462282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.462300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.462312 4808 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.475305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.492111 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.509617 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.524408 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.541765 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc 
kubenswrapper[4808]: I0311 08:40:59.556589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.556805 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.556928 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:41:00.556892941 +0000 UTC m=+111.510216291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.567589 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.567658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.567682 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.567711 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.567765 4808 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.576761 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669
d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.600087 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.623912 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.643254 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.658172 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.670073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.670145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.670168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.670196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.670213 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.682152 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.701786 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.734187 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.751301 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.772725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.772775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.772793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.772816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.772832 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.788459 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.788552 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.788461 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.788649 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.788802 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:40:59 crc kubenswrapper[4808]: E0311 08:40:59.788942 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.809878 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.831592 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.851502 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.866963 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.876094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.876164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.876192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.876225 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.876250 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.890530 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.903921 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.921593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc 
kubenswrapper[4808]: I0311 08:40:59.955102 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.969125 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.978646 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.978694 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.978708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.978725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.978737 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:40:59Z","lastTransitionTime":"2026-03-11T08:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.983032 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:40:59 crc kubenswrapper[4808]: I0311 08:40:59.997263 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-11T08:40:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.023145 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.033711 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.045975 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.056476 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.067378 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.078469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.081527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.081581 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.081596 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.081617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc 
kubenswrapper[4808]: I0311 08:41:00.081663 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.183943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.184595 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.184623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.184650 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.184668 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.288749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.288817 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.288844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.288872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.288890 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.391989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.392028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.392037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.392053 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.392061 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.494826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.494879 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.494897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.494919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.495005 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.567780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:00 crc kubenswrapper[4808]: E0311 08:41:00.567999 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:00 crc kubenswrapper[4808]: E0311 08:41:00.568100 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:41:02.568079192 +0000 UTC m=+113.521402602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.597541 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.597577 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.597588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.597603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.597613 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.699736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.699779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.699838 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.699904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.699917 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.788608 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:00 crc kubenswrapper[4808]: E0311 08:41:00.788842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.802541 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.802617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.802642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.802668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.802684 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.906662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.906752 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.906770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.906794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.906812 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.991403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.991488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.991506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.991532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:00 crc kubenswrapper[4808]: I0311 08:41:00.991550 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:00Z","lastTransitionTime":"2026-03-11T08:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.014081 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:01Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.019042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.019105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.019123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.019148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.019169 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.039261 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:01Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.044245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.044310 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.044336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.044424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.044456 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.064483 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:01Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.069804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.069868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.069884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.069907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.069924 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.089962 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:01Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.094258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.094319 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.094343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.094444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.094470 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.112732 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:01Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.112993 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.114640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.114691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.114702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.114719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.114731 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.218416 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.218482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.218503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.218532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.218553 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.321651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.321687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.321695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.321708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.321721 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.424806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.424857 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.424869 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.424886 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.424898 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.527863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.527924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.527945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.527968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.527987 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.631316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.631405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.631423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.631447 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.631463 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.733645 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.733718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.733732 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.733750 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.733764 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.788822 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.788832 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.789071 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.789078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.789174 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:01 crc kubenswrapper[4808]: E0311 08:41:01.789255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.836933 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.837003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.837024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.837056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.837078 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.940146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.940191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.940203 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.940219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:01 crc kubenswrapper[4808]: I0311 08:41:01.940230 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:01Z","lastTransitionTime":"2026-03-11T08:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.043042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.043094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.043110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.043135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.043153 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.146532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.146605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.146624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.146652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.146672 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.249957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.250075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.250106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.250141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.250164 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.353899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.353961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.353983 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.354012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.354033 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.457404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.457468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.457492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.457524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.457546 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.560577 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.560655 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.560679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.560701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.560717 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.591904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:02 crc kubenswrapper[4808]: E0311 08:41:02.592086 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:02 crc kubenswrapper[4808]: E0311 08:41:02.592180 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:41:06.592154906 +0000 UTC m=+117.545478266 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.663902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.663973 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.663995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.664021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.664038 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.766648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.766727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.766762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.766790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.766814 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.789411 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:02 crc kubenswrapper[4808]: E0311 08:41:02.789586 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.870270 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.870352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.870405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.870438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.870461 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.974032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.974092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.974112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.974135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:02 crc kubenswrapper[4808]: I0311 08:41:02.974153 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:02Z","lastTransitionTime":"2026-03-11T08:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.076991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.077042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.077059 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.077081 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.077100 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.181504 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.181553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.181569 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.181588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.181602 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.284529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.284582 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.284599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.284623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.284640 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.387042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.387159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.387185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.387212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.387228 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.490395 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.490497 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.490520 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.490544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.490561 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.593455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.593534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.593570 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.593600 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.593621 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.696965 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.697037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.697058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.697089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.697112 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.788638 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.788688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.788700 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:03 crc kubenswrapper[4808]: E0311 08:41:03.788882 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:03 crc kubenswrapper[4808]: E0311 08:41:03.789136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:03 crc kubenswrapper[4808]: E0311 08:41:03.789299 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.799553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.799644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.799666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.799699 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.799723 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.903192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.903288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.903311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.903339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:03 crc kubenswrapper[4808]: I0311 08:41:03.903409 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:03Z","lastTransitionTime":"2026-03-11T08:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.006642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.006719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.006735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.006757 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.006774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.109532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.109590 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.109607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.109631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.109649 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.212955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.213050 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.213074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.213106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.213129 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.315878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.315926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.315939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.315959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.315973 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.418442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.418488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.418506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.418522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.418531 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.520208 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.520284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.520298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.520312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.520323 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.622715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.622783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.622807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.622839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.622862 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.725160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.725222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.725245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.725276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.725298 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.788766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:04 crc kubenswrapper[4808]: E0311 08:41:04.788973 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.828672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.828763 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.828780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.828805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.828829 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.931924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.931992 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.932009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.932037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:04 crc kubenswrapper[4808]: I0311 08:41:04.932056 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:04Z","lastTransitionTime":"2026-03-11T08:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.035441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.035529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.035551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.035588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.035610 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.138754 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.138816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.138829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.138850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.138862 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.242190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.242231 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.242243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.242266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.242278 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.345644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.345716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.345734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.345761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.345776 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.449476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.449567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.449603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.449634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.449658 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.553528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.553593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.553615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.553647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.553667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.657022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.657082 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.657103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.657131 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.657152 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.760085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.760160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.760184 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.760217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.760243 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.788687 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:05 crc kubenswrapper[4808]: E0311 08:41:05.788855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.788915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.788944 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:05 crc kubenswrapper[4808]: E0311 08:41:05.789191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:05 crc kubenswrapper[4808]: E0311 08:41:05.789227 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.862945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.863006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.863024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.863051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.863069 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.966305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.966399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.966419 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.966446 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:05 crc kubenswrapper[4808]: I0311 08:41:05.966463 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:05Z","lastTransitionTime":"2026-03-11T08:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.069737 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.069802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.069825 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.069854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.069873 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.173348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.173444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.173463 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.173489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.173508 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.276770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.276851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.276875 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.276915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.276937 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.379729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.379799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.379821 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.379849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.379870 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.482546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.482626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.482639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.482655 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.482667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.588992 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.589043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.589055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.589080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.589091 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.637005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:06 crc kubenswrapper[4808]: E0311 08:41:06.637177 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:06 crc kubenswrapper[4808]: E0311 08:41:06.637226 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:41:14.637211523 +0000 UTC m=+125.590534843 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.692190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.692261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.692283 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.692311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.692332 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.788833 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:06 crc kubenswrapper[4808]: E0311 08:41:06.789345 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.789681 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.794703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.794768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.794792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.794821 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.794844 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.897907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.897967 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.897989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.898016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:06 crc kubenswrapper[4808]: I0311 08:41:06.898038 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:06Z","lastTransitionTime":"2026-03-11T08:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.001107 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.001148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.001158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.001174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.001187 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.103238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.103307 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.103325 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.103350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.103393 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.206432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.206465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.206472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.206493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.206503 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.309935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.310002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.310024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.310052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.310074 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.413347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.413464 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.413481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.413509 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.413525 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.424349 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.425785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.426219 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.440834 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.461344 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.472183 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.482899 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.494508 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.504613 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.516208 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.516249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.516261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.516275 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.516286 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.517419 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.535930 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.548412 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.560862 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.570858 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc 
kubenswrapper[4808]: I0311 08:41:07.587517 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.598661 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.610008 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.618480 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.618505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.618515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.618531 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.618542 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.620594 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.629937 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.641795 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:07Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.721936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.721997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.722033 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.722064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.722086 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.789080 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.789172 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.789248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:07 crc kubenswrapper[4808]: E0311 08:41:07.789307 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:07 crc kubenswrapper[4808]: E0311 08:41:07.789399 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:07 crc kubenswrapper[4808]: E0311 08:41:07.789488 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.825117 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.825199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.825224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.825254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.825279 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.928327 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.928463 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.928487 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.928519 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:07 crc kubenswrapper[4808]: I0311 08:41:07.928540 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:07Z","lastTransitionTime":"2026-03-11T08:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.031022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.031067 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.031076 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.031090 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.031099 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.134015 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.134092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.134113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.134139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.134158 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.237523 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.237597 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.237620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.237653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.237675 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.340783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.340854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.340887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.340915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.340936 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.443891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.443952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.443970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.443994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.444011 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.546924 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.546999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.547026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.547058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.547080 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.650545 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.650681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.650703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.650729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.650749 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.754653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.754740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.754769 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.754801 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.754823 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.788855 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:08 crc kubenswrapper[4808]: E0311 08:41:08.789053 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.858716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.858794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.858819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.858854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.858878 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.961770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.961842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.961866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.961889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:08 crc kubenswrapper[4808]: I0311 08:41:08.961907 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:08Z","lastTransitionTime":"2026-03-11T08:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.064962 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.065037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.065062 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.065092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.065114 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.168750 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.168814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.168831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.168855 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.168874 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.272169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.272240 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.272258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.272282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.272299 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.374665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.374728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.374746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.374771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.374797 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.477278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.477343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.477392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.477427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.477449 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.580805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.580847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.580857 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.580891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.580902 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.682932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.683005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.683022 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.683047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.683064 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:09Z","lastTransitionTime":"2026-03-11T08:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:09 crc kubenswrapper[4808]: E0311 08:41:09.783398 4808 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.788448 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.788477 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.788477 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:09 crc kubenswrapper[4808]: E0311 08:41:09.788631 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:09 crc kubenswrapper[4808]: E0311 08:41:09.788887 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:09 crc kubenswrapper[4808]: E0311 08:41:09.788996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.813536 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.838959 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc0e7d534992483e4ac7693478b4f531b2dba1bb7f7d882081b82f7857529b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:55Z\\\",\\\"message\\\":\\\"nalversions/factory.go:141\\\\nI0311 08:40:55.460987 6636 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461176 6636 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:40:55.461284 6636 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0311 08:40:55.461631 6636 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:40:55.461668 6636 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:40:55.461696 6636 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:40:55.461718 6636 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:40:55.461745 6636 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:40:55.461785 6636 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:40:55.461814 6636 factory.go:656] Stopping watch factory\\\\nI0311 08:40:55.461831 6636 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:40:55.461834 6636 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:40:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-
cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.854534 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.872841 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.896081 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: E0311 08:41:09.909515 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.914080 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.934480 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.970723 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:09 crc kubenswrapper[4808]: I0311 08:41:09.990040 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:09Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.011335 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.064523 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc 
kubenswrapper[4808]: I0311 08:41:10.085570 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.106983 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.126828 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.138418 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.147478 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.161547 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.788725 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:10 crc kubenswrapper[4808]: E0311 08:41:10.789125 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:10 crc kubenswrapper[4808]: I0311 08:41:10.803560 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.430888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.430945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.430966 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.430991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.431009 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:11Z","lastTransitionTime":"2026-03-11T08:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.452832 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.458678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.458745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.458783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.458815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.458837 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:11Z","lastTransitionTime":"2026-03-11T08:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.513171 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:11Z","lastTransitionTime":"2026-03-11T08:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.533056 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.537418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.537472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.537484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.537501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.537512 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:11Z","lastTransitionTime":"2026-03-11T08:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.549635 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.549888 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.789438 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.789556 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.789618 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.789704 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.790536 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:11 crc kubenswrapper[4808]: E0311 08:41:11.790828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.791017 4808 scope.go:117] "RemoveContainer" containerID="f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.808952 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f
6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.824904 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.849209 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.866969 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.882902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc 
kubenswrapper[4808]: I0311 08:41:11.916037 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.934965 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.948578 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.963262 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:11 crc kubenswrapper[4808]: I0311 08:41:11.977099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:11.999917 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:11Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.011974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.026774 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.038387 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.052806 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.066330 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.078896 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.090773 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.446191 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/1.log" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.449655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38"} Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.450235 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.462246 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.473477 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.486820 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc050
65fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.500456 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.515163 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc 
kubenswrapper[4808]: I0311 08:41:12.539834 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.560576 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.572491 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.589287 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.606606 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.643627 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 
obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.656075 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.672538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.685077 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.698541 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.718986 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.734519 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.747933 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:12 crc kubenswrapper[4808]: I0311 08:41:12.788743 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:12 crc kubenswrapper[4808]: E0311 08:41:12.788940 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.453926 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/2.log" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.454734 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/1.log" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.458007 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" exitCode=1 Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.458062 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38"} Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.458120 4808 scope.go:117] "RemoveContainer" containerID="f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.458599 4808 scope.go:117] "RemoveContainer" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" Mar 11 08:41:13 crc kubenswrapper[4808]: E0311 08:41:13.458776 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 
08:41:13.478087 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.491159 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc 
kubenswrapper[4808]: I0311 08:41:13.512468 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.526220 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.540644 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.556216 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.570280 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.589045 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.602914 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.625421 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2b75d7c063c9a55b7d1bbcc953c2b33048b4a5025f11c85af1f040f77501a1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:40:57Z\\\",\\\"message\\\":\\\"troller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:40:57Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:40:57.290391 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290392 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-dptv8 after 0 failed attempt(s)\\\\nI0311 08:40:57.290400 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-dgh9v\\\\nI0311 08:40:57.290405 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290415 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0311 08:40:57.290385 6847 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 
08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.635943 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.649855 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.666631 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.678624 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.689880 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.705523 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.718325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.728555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.789289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.789453 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:13 crc kubenswrapper[4808]: E0311 08:41:13.789572 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:13 crc kubenswrapper[4808]: E0311 08:41:13.789705 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:13 crc kubenswrapper[4808]: I0311 08:41:13.789324 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:13 crc kubenswrapper[4808]: E0311 08:41:13.789887 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.461976 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/2.log" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.465244 4808 scope.go:117] "RemoveContainer" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" Mar 11 08:41:14 crc kubenswrapper[4808]: E0311 08:41:14.465389 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.476807 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.496807 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.505954 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.517656 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.526725 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.536331 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.547139 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.561905 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.576976 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.589038 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.600189 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc 
kubenswrapper[4808]: I0311 08:41:14.628259 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.640271 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.657798 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.669417 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.681895 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.697911 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.709461 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.723121 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:14 crc kubenswrapper[4808]: E0311 08:41:14.723273 
4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:14 crc kubenswrapper[4808]: E0311 08:41:14.723328 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:41:30.723314039 +0000 UTC m=+141.676637359 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:14 crc kubenswrapper[4808]: I0311 08:41:14.789002 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:14 crc kubenswrapper[4808]: E0311 08:41:14.789179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:14 crc kubenswrapper[4808]: E0311 08:41:14.911700 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:15 crc kubenswrapper[4808]: I0311 08:41:15.788731 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:15 crc kubenswrapper[4808]: I0311 08:41:15.788777 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:15 crc kubenswrapper[4808]: E0311 08:41:15.788895 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:15 crc kubenswrapper[4808]: I0311 08:41:15.788740 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:15 crc kubenswrapper[4808]: E0311 08:41:15.788993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:15 crc kubenswrapper[4808]: E0311 08:41:15.789329 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:16 crc kubenswrapper[4808]: I0311 08:41:16.788762 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:16 crc kubenswrapper[4808]: E0311 08:41:16.788969 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:17 crc kubenswrapper[4808]: I0311 08:41:17.789326 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:17 crc kubenswrapper[4808]: I0311 08:41:17.789424 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:17 crc kubenswrapper[4808]: I0311 08:41:17.789682 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:17 crc kubenswrapper[4808]: E0311 08:41:17.789610 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:17 crc kubenswrapper[4808]: E0311 08:41:17.789853 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:17 crc kubenswrapper[4808]: E0311 08:41:17.789896 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:18 crc kubenswrapper[4808]: I0311 08:41:18.789415 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:18 crc kubenswrapper[4808]: E0311 08:41:18.789743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.789185 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:19 crc kubenswrapper[4808]: E0311 08:41:19.789414 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.790485 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.790497 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:19 crc kubenswrapper[4808]: E0311 08:41:19.790744 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:19 crc kubenswrapper[4808]: E0311 08:41:19.790980 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.810139 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.826448 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.839690 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.857075 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.874684 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.890726 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.908307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: E0311 08:41:19.912572 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.929408 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc kubenswrapper[4808]: I0311 08:41:19.944490 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:19 crc 
kubenswrapper[4808]: I0311 08:41:19.980470 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.002513 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:19Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.017749 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.034904 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.045353 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.062309 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.081127 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.112717 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.130042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:20Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:20 crc kubenswrapper[4808]: I0311 08:41:20.789384 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:20 crc kubenswrapper[4808]: E0311 08:41:20.789593 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.777336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.778463 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.778683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.778885 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.779105 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:21Z","lastTransitionTime":"2026-03-11T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.788666 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.788765 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.788773 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.789408 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.789161 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.789580 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.803697 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:21Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.810075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.810115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.810123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.810139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.810149 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:21Z","lastTransitionTime":"2026-03-11T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.831206 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:21Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.836684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.836802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.836830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.836861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.836885 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:21Z","lastTransitionTime":"2026-03-11T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.860149 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:21Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.865725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.865779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.865788 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.865804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.865816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:21Z","lastTransitionTime":"2026-03-11T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.886415 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:21Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.891900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.891971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.891991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.892019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:21 crc kubenswrapper[4808]: I0311 08:41:21.892040 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:21Z","lastTransitionTime":"2026-03-11T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.912813 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:21Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:21 crc kubenswrapper[4808]: E0311 08:41:21.912962 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:22 crc kubenswrapper[4808]: I0311 08:41:22.788924 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:22 crc kubenswrapper[4808]: E0311 08:41:22.789190 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:23 crc kubenswrapper[4808]: I0311 08:41:23.788472 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:23 crc kubenswrapper[4808]: I0311 08:41:23.788500 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:23 crc kubenswrapper[4808]: E0311 08:41:23.788990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:23 crc kubenswrapper[4808]: I0311 08:41:23.789514 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:23 crc kubenswrapper[4808]: E0311 08:41:23.789505 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:23 crc kubenswrapper[4808]: E0311 08:41:23.789696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:24 crc kubenswrapper[4808]: I0311 08:41:24.788861 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:24 crc kubenswrapper[4808]: E0311 08:41:24.788982 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:24 crc kubenswrapper[4808]: E0311 08:41:24.914120 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.429016 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.439821 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed6
3f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.451068 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.476349 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.497459 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.512986 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.529796 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.545280 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.565327 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.579333 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:
21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34d
b1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.591277 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.606563 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.618962 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.628140 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.640978 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.657090 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.666513 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc 
kubenswrapper[4808]: I0311 08:41:25.686415 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.696804 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:25Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.788766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.788777 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:25 crc kubenswrapper[4808]: E0311 08:41:25.788990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:25 crc kubenswrapper[4808]: I0311 08:41:25.788787 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:25 crc kubenswrapper[4808]: E0311 08:41:25.789756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:25 crc kubenswrapper[4808]: E0311 08:41:25.789179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:26 crc kubenswrapper[4808]: I0311 08:41:26.788432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:26 crc kubenswrapper[4808]: E0311 08:41:26.789062 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:27 crc kubenswrapper[4808]: I0311 08:41:27.788637 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:27 crc kubenswrapper[4808]: I0311 08:41:27.788689 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:27 crc kubenswrapper[4808]: E0311 08:41:27.788851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:27 crc kubenswrapper[4808]: I0311 08:41:27.788867 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:27 crc kubenswrapper[4808]: E0311 08:41:27.789179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:27 crc kubenswrapper[4808]: E0311 08:41:27.789656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:27 crc kubenswrapper[4808]: I0311 08:41:27.790942 4808 scope.go:117] "RemoveContainer" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" Mar 11 08:41:27 crc kubenswrapper[4808]: E0311 08:41:27.791219 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:28 crc kubenswrapper[4808]: I0311 08:41:28.788489 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:28 crc kubenswrapper[4808]: E0311 08:41:28.788750 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.789219 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:29 crc kubenswrapper[4808]: E0311 08:41:29.789443 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.789504 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:29 crc kubenswrapper[4808]: E0311 08:41:29.789648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.789667 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:29 crc kubenswrapper[4808]: E0311 08:41:29.789790 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.811063 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.850104 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.864319 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.880482 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.900869 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: E0311 08:41:29.914754 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.933741 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.949548 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.970900 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:29 crc kubenswrapper[4808]: I0311 08:41:29.991927 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.009141 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.031626 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.047873 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.061907 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc 
kubenswrapper[4808]: I0311 08:41:30.078686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.078939 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:34.078893479 +0000 UTC m=+205.032216859 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.079145 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.079322 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.079546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.079712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.079350 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.080098 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:42:34.080073663 +0000 UTC m=+205.033397023 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.079510 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.080410 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:42:34.080391832 +0000 UTC m=+205.033715182 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.079727 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.080721 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.080852 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.079818 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.081080 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.081191 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.081111 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:42:34.081074252 +0000 UTC m=+205.034397612 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.081470 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:42:34.081434623 +0000 UTC m=+205.034757993 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.097001 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.114322 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.132776 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.148865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.160965 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.787573 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.787928 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.788121 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:42:02.788073971 +0000 UTC m=+173.741397441 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:41:30 crc kubenswrapper[4808]: I0311 08:41:30.788416 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:30 crc kubenswrapper[4808]: E0311 08:41:30.788677 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:31 crc kubenswrapper[4808]: I0311 08:41:31.788393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:31 crc kubenswrapper[4808]: I0311 08:41:31.788486 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:31 crc kubenswrapper[4808]: E0311 08:41:31.788535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:31 crc kubenswrapper[4808]: E0311 08:41:31.788743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:31 crc kubenswrapper[4808]: I0311 08:41:31.788414 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:31 crc kubenswrapper[4808]: E0311 08:41:31.790152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.086741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.086794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.086808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.086827 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.086841 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:32Z","lastTransitionTime":"2026-03-11T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.111426 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.116798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.117005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.117121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.117219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.117303 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:32Z","lastTransitionTime":"2026-03-11T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.135969 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.141263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.141341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.141403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.141438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.141509 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:32Z","lastTransitionTime":"2026-03-11T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.160931 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.166055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.166106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.166124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.166148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.166166 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:32Z","lastTransitionTime":"2026-03-11T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.181046 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.185517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.185557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.185568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.185588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.185602 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:32Z","lastTransitionTime":"2026-03-11T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.200282 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.200458 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:32 crc kubenswrapper[4808]: I0311 08:41:32.788954 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:32 crc kubenswrapper[4808]: E0311 08:41:32.789187 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.533830 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/0.log" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.533915 4808 generic.go:334] "Generic (PLEG): container finished" podID="c1a75dfb-31dd-4275-a309-c9e7130feb05" containerID="81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979" exitCode=1 Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.533969 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerDied","Data":"81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979"} Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.534595 4808 scope.go:117] "RemoveContainer" containerID="81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.550054 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.567726 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.587580 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.601904 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc 
kubenswrapper[4808]: I0311 08:41:33.637572 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.654853 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.668972 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.681385 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.695824 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.715591 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.727909 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.741922 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.755993 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.773918 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.789379 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.789521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.789434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:33 crc kubenswrapper[4808]: E0311 08:41:33.789633 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:33 crc kubenswrapper[4808]: E0311 08:41:33.789798 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:33 crc kubenswrapper[4808]: E0311 08:41:33.790005 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.791999 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687
099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.806335 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.822860 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:33 crc kubenswrapper[4808]: I0311 08:41:33.837290 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.541828 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/0.log" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.541917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerStarted","Data":"e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c"} Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.566730 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.591046 4808 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.605463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.623316 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.637482 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:0
0 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"n
ame\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.657628 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687
099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.673532 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.689821 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.705234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.721054 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.738117 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.752218 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.764029 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc 
kubenswrapper[4808]: I0311 08:41:34.784443 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.788335 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:34 crc kubenswrapper[4808]: E0311 08:41:34.788501 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.816078 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.829180 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.844540 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: I0311 08:41:34.859606 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:34 crc kubenswrapper[4808]: E0311 08:41:34.916685 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:35 crc kubenswrapper[4808]: I0311 08:41:35.789686 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:35 crc kubenswrapper[4808]: I0311 08:41:35.789780 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:35 crc kubenswrapper[4808]: I0311 08:41:35.789710 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:35 crc kubenswrapper[4808]: E0311 08:41:35.789897 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:35 crc kubenswrapper[4808]: E0311 08:41:35.790043 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:35 crc kubenswrapper[4808]: E0311 08:41:35.790144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:36 crc kubenswrapper[4808]: I0311 08:41:36.788731 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:36 crc kubenswrapper[4808]: E0311 08:41:36.788921 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:37 crc kubenswrapper[4808]: I0311 08:41:37.789199 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:37 crc kubenswrapper[4808]: I0311 08:41:37.789286 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:37 crc kubenswrapper[4808]: I0311 08:41:37.789224 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:37 crc kubenswrapper[4808]: E0311 08:41:37.789448 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:37 crc kubenswrapper[4808]: E0311 08:41:37.789676 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:37 crc kubenswrapper[4808]: E0311 08:41:37.789917 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:38 crc kubenswrapper[4808]: I0311 08:41:38.789084 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:38 crc kubenswrapper[4808]: E0311 08:41:38.789335 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.789427 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.789453 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.789536 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:39 crc kubenswrapper[4808]: E0311 08:41:39.790466 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:39 crc kubenswrapper[4808]: E0311 08:41:39.790610 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:39 crc kubenswrapper[4808]: E0311 08:41:39.790753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.791165 4808 scope.go:117] "RemoveContainer" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.809721 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.831612 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.852216 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.872413 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.899484 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: E0311 08:41:39.917320 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.924026 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"l
astState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.947991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.969980 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:39 crc kubenswrapper[4808]: I0311 08:41:39.992922 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.010667 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.039518 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.055455 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.069751 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc 
kubenswrapper[4808]: I0311 08:41:40.096868 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.122288 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.140971 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.200537 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.238767 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.564697 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/2.log" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.567843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.568295 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.584950 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687
099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.599106 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.613285 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.629755 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.646903 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.662504 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.676883 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.706768 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.722756 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.737674 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.752057 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.763384 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.789285 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:40 crc kubenswrapper[4808]: E0311 08:41:40.789521 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.789818 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.803167 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea8
3e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.811922 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc 
kubenswrapper[4808]: I0311 08:41:40.825476 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.845028 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:40 crc kubenswrapper[4808]: I0311 08:41:40.856169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.574286 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/3.log" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.575797 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/2.log" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.581525 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" exitCode=1 Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.581589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.581667 4808 scope.go:117] "RemoveContainer" containerID="7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.582669 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:41:41 crc kubenswrapper[4808]: E0311 08:41:41.582985 
4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.605961 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.623502 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc 
kubenswrapper[4808]: I0311 08:41:41.655205 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.677009 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.694090 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.714488 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.731506 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.752657 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b
5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.772428 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.788709 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.788709 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:41 crc kubenswrapper[4808]: E0311 08:41:41.788898 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.788786 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:41 crc kubenswrapper[4808]: E0311 08:41:41.789037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:41 crc kubenswrapper[4808]: E0311 08:41:41.789205 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.803575 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7422e36aafb7b9003baec36b0abbd0219b5ed0ae1b67122e9453cfd9016e8a38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:12Z\\\",\\\"message\\\":\\\".Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0311 08:41:12.635982 7112 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-2r84h in node crc\\\\nI0311 08:41:12.635989 7112 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf 
in node crc\\\\nI0311 08:41:12.635995 7112 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-2r84h after 0 failed attempt(s)\\\\nI0311 08:41:12.636008 7112 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-2r84h\\\\nI0311 08:41:12.635953 7112 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0311 08:41:12.636023 7112 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:40Z\\\",\\\"message\\\":\\\"08:41:40.746178 7403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:41:40.746207 7403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0311 08:41:40.746211 7403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0311 08:41:40.746236 7403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0311 08:41:40.746249 7403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0311 08:41:40.746270 7403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:41:40.746275 7403 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:41:40.746285 7403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0311 08:41:40.746313 7403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:41:40.746374 7403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:41:40.746405 7403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:41:40.746434 7403 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:41:40.746438 7403 factory.go:656] Stopping watch factory\\\\nI0311 08:41:40.746476 7403 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:41:40.746471 7403 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:41:40.746453 7403 handler.go:208] Removed *v1.Node event handler 7\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netw
orks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.818751 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.840940 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.858239 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687
099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.871043 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.892593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.910889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.930199 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:41 crc kubenswrapper[4808]: I0311 08:41:41.941257 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.444782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.445057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.445071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.445089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.445100 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:42Z","lastTransitionTime":"2026-03-11T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.464083 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.469699 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.469755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.469773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.469798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.469816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:42Z","lastTransitionTime":"2026-03-11T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.488331 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.492900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.493025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.493046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.493073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.493091 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:42Z","lastTransitionTime":"2026-03-11T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.513186 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.517715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.517778 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.517804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.517835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.517858 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:42Z","lastTransitionTime":"2026-03-11T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.538104 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.543644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.543706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.543724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.543748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.543766 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:42Z","lastTransitionTime":"2026-03-11T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.565932 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.566096 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.587654 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/3.log" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.592464 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.592642 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.610546 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.631064 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.653469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.669322 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.686989 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.704297 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.721104 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.743444 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.765092 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d22adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.782961 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc 
kubenswrapper[4808]: I0311 08:41:42.788748 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:42 crc kubenswrapper[4808]: E0311 08:41:42.788983 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.819531 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\
":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.839981 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.855814 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.872265 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.884857 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.904291 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.925058 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:40Z\\\",\\\"message\\\":\\\"08:41:40.746178 7403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:41:40.746207 7403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0311 08:41:40.746211 7403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0311 08:41:40.746236 7403 handler.go:208] Removed *v1.Namespace event 
handler 1\\\\nI0311 08:41:40.746249 7403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0311 08:41:40.746270 7403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:41:40.746275 7403 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:41:40.746285 7403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0311 08:41:40.746313 7403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:41:40.746374 7403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:41:40.746405 7403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:41:40.746434 7403 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:41:40.746438 7403 factory.go:656] Stopping watch factory\\\\nI0311 08:41:40.746476 7403 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:41:40.746471 7403 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:41:40.746453 7403 handler.go:208] Removed *v1.Node event handler 7\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:42 crc kubenswrapper[4808]: I0311 08:41:42.938190 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:43 crc kubenswrapper[4808]: I0311 08:41:43.788451 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:43 crc kubenswrapper[4808]: I0311 08:41:43.788739 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:43 crc kubenswrapper[4808]: I0311 08:41:43.788465 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:43 crc kubenswrapper[4808]: E0311 08:41:43.788980 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:43 crc kubenswrapper[4808]: E0311 08:41:43.789148 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:43 crc kubenswrapper[4808]: E0311 08:41:43.789275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:44 crc kubenswrapper[4808]: I0311 08:41:44.789564 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:44 crc kubenswrapper[4808]: E0311 08:41:44.789820 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:44 crc kubenswrapper[4808]: E0311 08:41:44.918840 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:45 crc kubenswrapper[4808]: I0311 08:41:45.789332 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:45 crc kubenswrapper[4808]: I0311 08:41:45.789436 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:45 crc kubenswrapper[4808]: E0311 08:41:45.790166 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:45 crc kubenswrapper[4808]: I0311 08:41:45.789505 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:45 crc kubenswrapper[4808]: E0311 08:41:45.790244 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:45 crc kubenswrapper[4808]: E0311 08:41:45.790534 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:46 crc kubenswrapper[4808]: I0311 08:41:46.788300 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:46 crc kubenswrapper[4808]: E0311 08:41:46.788497 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:46 crc kubenswrapper[4808]: I0311 08:41:46.800561 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 11 08:41:47 crc kubenswrapper[4808]: I0311 08:41:47.788693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:47 crc kubenswrapper[4808]: I0311 08:41:47.788756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:47 crc kubenswrapper[4808]: I0311 08:41:47.788756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:47 crc kubenswrapper[4808]: E0311 08:41:47.788875 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:47 crc kubenswrapper[4808]: E0311 08:41:47.788988 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:47 crc kubenswrapper[4808]: E0311 08:41:47.789226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:48 crc kubenswrapper[4808]: I0311 08:41:48.789318 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:48 crc kubenswrapper[4808]: E0311 08:41:48.789579 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.788516 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.788638 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.788887 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:49 crc kubenswrapper[4808]: E0311 08:41:49.789014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:49 crc kubenswrapper[4808]: E0311 08:41:49.789075 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:49 crc kubenswrapper[4808]: E0311 08:41:49.789172 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.801733 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a12f71a3a92a4513660a5b6c0bb73f40545f5528a45fab7aec18cb22283921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.817199 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf747e37-c201-4dcc-a2a5-2429f4eba47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7666v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kqsq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc 
kubenswrapper[4808]: I0311 08:41:49.848951 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fe7c21-7960-42c6-aa61-98bac9e7ecd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46ab0655a5449d68489b2ad4cdadf1ab8539dfc66e9a09ef0bf492fc84c3df01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://99862ebd2568f5b782683b45df57b331cb0b659d0a48b16d898658463fcaa023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac42d74d7b0553ef1fda4393cbe7872b21bc18baada8519db8137f2ef71d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce1b28200696e0c819d326a53d481baf48418232ac2a66fe01504e7af648c56f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1da63ac4504d6a817f5c856ae4ec6feee655c983f804372f8100624e0fbb08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0167ad24e01c669d29e5bcad3d3dc4e2e7578cf20278001ea2dfe9f1390bc99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6d6b373b603100e83b56735f848166bbe70eaffa795411d0bf95bbfbe31c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f029352eaf11747f4efebbb91000b740e298e03d72e8a1347316668b78c23cae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.866350 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b088d2a8d5895f936e824861275482977fb68600e1ca4fd0c08e124c4f11da52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.880993 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.899051 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.915470 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-twvrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6377c39c-8ecf-409c-b3e7-ea9d717e234f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a910f47666db2f5fb733718e848c75c15d9f0975c7339afd9e21f21ede7aa1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c99gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-twvrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: E0311 08:41:49.919307 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.937525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2r84h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcd5dff4-0826-4876-9fd3-3f19781a17bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3814b88c4b0
107b15c1dacb212e0a05451ebc05065fcb0c25318df4808220e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f840757548b4af73aae0c247025a649cbf50cfbc534fbd38f0592f0f0e97274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ade0f54a4b7718d83ed1ed5025f91b42175bf7daa1327a150a6eedc99df8e3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd
367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e872db89da909105a9343e8b9951c7c88bfd5107edb4386faa05c08b37e34bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14920dc320f545db70ffb1b588dcfed5cb019f04e0dddd1d38db7e0696429fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc79537b4712c9762fe2a813ef7f12b5d6314ed2aa66741561fe635591da8a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a689c8bcf97ed3a6950a775029df058df856a2cc37cdc511d0ed62a36d9f7d21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhgnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2r84h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.949547 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da0586e9-4a6f-4ba5-bc48-568fd9cdb0e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf318b7c04801c4d010fa760bf0a48a82d26569a7bbfffa015dab3c23e0d470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d8eb53e3ea80c3e2e9409ea178b232442d2
2adb54c055b8487fd6dca0ee4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d5q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqcqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.964574 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afd64db1e4772f4558a0a810e319481d1c8d521b299c9f6f79d5c0f752b5f89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8deb33e0cd83c7419f7ddd4cba0e84d33849e982b5da5e74e0e93eebb40b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:49 crc kubenswrapper[4808]: I0311 08:41:49.987466 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:40Z\\\",\\\"message\\\":\\\"08:41:40.746178 7403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:41:40.746207 7403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0311 08:41:40.746211 7403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0311 08:41:40.746236 7403 handler.go:208] Removed *v1.Namespace event 
handler 1\\\\nI0311 08:41:40.746249 7403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0311 08:41:40.746270 7403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:41:40.746275 7403 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:41:40.746285 7403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0311 08:41:40.746313 7403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:41:40.746374 7403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:41:40.746405 7403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:41:40.746434 7403 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:41:40.746438 7403 factory.go:656] Stopping watch factory\\\\nI0311 08:41:40.746476 7403 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:41:40.746471 7403 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:41:40.746453 7403 handler.go:208] Removed *v1.Node event handler 7\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7129de475a69815e0e
d556f2d45c208788efcd7ef2792f3d93994af37dee87b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k5ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8wfl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.002092 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dptv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af91f09b-b749-4f04-81ac-2f0079a0dca5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6497d4025103c2efba8337b4d7541d3d7015be49a7b3478f5648122e9dbd92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtdnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dptv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:49Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.021873 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70aeaa0b-1b9a-450e-bac3-61beed554491\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:40:20.589250 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:40:20.589388 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:40:20.590094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1044918164/tls.crt::/tmp/serving-cert-1044918164/tls.key\\\\\\\"\\\\nI0311 08:40:21.049983 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:40:21.052943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:40:21.052965 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:40:21.052991 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:40:21.052999 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:40:21.059520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0311 08:40:21.059550 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059569 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:40:21.059578 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:40:21.059584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:40:21.059590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:40:21.059596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0311 08:40:21.059648 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0311 08:40:21.062132 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:39:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.040011 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51abded0-d810-4966-8470-e899cf01a67c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e245f8f898c89b9c15271b5381bcd9c4dd3de809948843bdf23b025c4b58b667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92167e4d93f32c4e5d29be5deeb05f6d9a65f5bdc4959226dd4681484f476658\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:40:08Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0311 08:39:38.937560 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0311 08:39:38.939046 1 observer_polling.go:159] Starting file observer\\\\nI0311 08:39:38.941094 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0311 08:39:38.941779 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0311 08:40:08.473002 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0311 08:40:08.473061 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54127863996440e3e037d7cbbb6a23234fb6b4fcf9fcfd071867341bb3b7963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72cacaca67f766f06463136d31ebea03efe7d2f215fe0c453b5918c82c4e3536\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b6250310d7b389fc8a10a376b7a92788768aba4d457abe540cf4b23507929f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.053560 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134c7d84-5243-4e2e-afcf-0eaf67967ce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd34aefc93aa169e305232d0d0bf0fff66d3f4469ae96f68cc5f8a51a92c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://023c9b432e3273f96abe72e9e1e4cf5ab68649d9d71d0ff59f75fc30ea5c2287\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.069929 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd34d63-dc29-46ce-86e3-70f413dfad90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2b2eafbff37196bcb9cab96b32ba3af5acd4623128c00e2fc933040f09aab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8347d0738c55456ccb8de04e3b599ef0a2bbaae8ab8eecda19a9bbd9abc7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047f453a7b029609fd6596a99014fc89181f9313fdbd391b5c755d6f7b7b0db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:39:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8915625a5b41d93fa3b078e2e9b1b6cf18c36f3ba7ada3a505ab826c49ce3b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:39:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:39:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.084424 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.096730 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dda5309-668d-4e3c-b3b2-1d708eecc578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d3edbb47daf863a7f7b431cd468ba206a964c42d2a8fdbf163ef8ae3fd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e
942591307c49dfbd87a61d19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2hw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tfsm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.114746 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dgh9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1a75dfb-31dd-4275-a309-c9e7130feb05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:41:32Z\\\",\\\"message\\\":\\\"2026-03-11T08:40:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3\\\\n2026-03-11T08:40:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb742e97-ae81-42ce-937c-bf6082493dd3 to /host/opt/cni/bin/\\\\n2026-03-11T08:40:47Z [verbose] multus-daemon started\\\\n2026-03-11T08:40:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-11T08:41:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:40:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:41:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chgpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:40:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dgh9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:50 crc kubenswrapper[4808]: I0311 08:41:50.789236 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:50 crc kubenswrapper[4808]: E0311 08:41:50.789657 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:51 crc kubenswrapper[4808]: I0311 08:41:51.789218 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:51 crc kubenswrapper[4808]: I0311 08:41:51.789258 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:51 crc kubenswrapper[4808]: E0311 08:41:51.790239 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:51 crc kubenswrapper[4808]: I0311 08:41:51.789341 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:51 crc kubenswrapper[4808]: E0311 08:41:51.790413 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:51 crc kubenswrapper[4808]: E0311 08:41:51.790600 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.702312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.702416 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.702437 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.702462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.702481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:52Z","lastTransitionTime":"2026-03-11T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.716510 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.720984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.721034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.721046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.721065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.721078 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:52Z","lastTransitionTime":"2026-03-11T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.736265 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.742098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.742162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.742185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.742221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.742242 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:52Z","lastTransitionTime":"2026-03-11T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.759418 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.763304 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.763388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.763417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.763447 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.763468 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:52Z","lastTransitionTime":"2026-03-11T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.781188 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.785263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.785635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.785743 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.785826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.785922 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:41:52Z","lastTransitionTime":"2026-03-11T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:41:52 crc kubenswrapper[4808]: I0311 08:41:52.788430 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.788555 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.797741 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:41:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fb54ef7d-152b-411a-b511-256d1778abe5\\\",\\\"systemUUID\\\":\\\"8423e724-ca17-4e6e-9671-7e629ecf3f36\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:41:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:41:52 crc kubenswrapper[4808]: E0311 08:41:52.798482 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:41:53 crc kubenswrapper[4808]: I0311 08:41:53.789064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:53 crc kubenswrapper[4808]: I0311 08:41:53.789116 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:53 crc kubenswrapper[4808]: E0311 08:41:53.789210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:53 crc kubenswrapper[4808]: I0311 08:41:53.789243 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:53 crc kubenswrapper[4808]: E0311 08:41:53.789419 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:53 crc kubenswrapper[4808]: E0311 08:41:53.789454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:54 crc kubenswrapper[4808]: I0311 08:41:54.788973 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:54 crc kubenswrapper[4808]: E0311 08:41:54.789178 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:54 crc kubenswrapper[4808]: E0311 08:41:54.920903 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 11 08:41:55 crc kubenswrapper[4808]: I0311 08:41:55.789623 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:55 crc kubenswrapper[4808]: I0311 08:41:55.789839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:55 crc kubenswrapper[4808]: E0311 08:41:55.789937 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:55 crc kubenswrapper[4808]: E0311 08:41:55.790133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:55 crc kubenswrapper[4808]: I0311 08:41:55.790745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:55 crc kubenswrapper[4808]: E0311 08:41:55.790882 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:56 crc kubenswrapper[4808]: I0311 08:41:56.788847 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:56 crc kubenswrapper[4808]: E0311 08:41:56.788996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:56 crc kubenswrapper[4808]: I0311 08:41:56.790348 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:41:56 crc kubenswrapper[4808]: E0311 08:41:56.790797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:41:57 crc kubenswrapper[4808]: I0311 08:41:57.788565 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:57 crc kubenswrapper[4808]: E0311 08:41:57.788744 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:57 crc kubenswrapper[4808]: I0311 08:41:57.788774 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:57 crc kubenswrapper[4808]: I0311 08:41:57.788807 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:57 crc kubenswrapper[4808]: E0311 08:41:57.789103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:57 crc kubenswrapper[4808]: E0311 08:41:57.789248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:58 crc kubenswrapper[4808]: I0311 08:41:58.788506 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:41:58 crc kubenswrapper[4808]: E0311 08:41:58.788973 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.788866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.789029 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:41:59 crc kubenswrapper[4808]: E0311 08:41:59.789058 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.789100 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:41:59 crc kubenswrapper[4808]: E0311 08:41:59.789223 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:41:59 crc kubenswrapper[4808]: E0311 08:41:59.789322 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.828631 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=83.828613286 podStartE2EDuration="1m23.828613286s" podCreationTimestamp="2026-03-11 08:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:41:59.826024491 +0000 UTC m=+170.779347871" watchObservedRunningTime="2026-03-11 08:41:59.828613286 +0000 UTC m=+170.781936616" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.895645 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-twvrg" podStartSLOduration=105.895610927 podStartE2EDuration="1m45.895610927s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:41:59.895176425 +0000 UTC m=+170.848499755" watchObservedRunningTime="2026-03-11 08:41:59.895610927 +0000 UTC m=+170.848934297" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.915975 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2r84h" podStartSLOduration=105.915953257 
podStartE2EDuration="1m45.915953257s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:41:59.915288047 +0000 UTC m=+170.868611377" watchObservedRunningTime="2026-03-11 08:41:59.915953257 +0000 UTC m=+170.869276607" Mar 11 08:41:59 crc kubenswrapper[4808]: E0311 08:41:59.922111 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:41:59 crc kubenswrapper[4808]: I0311 08:41:59.934949 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqcqr" podStartSLOduration=105.934925917 podStartE2EDuration="1m45.934925917s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:41:59.934201356 +0000 UTC m=+170.887524706" watchObservedRunningTime="2026-03-11 08:41:59.934925917 +0000 UTC m=+170.888249247" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.019968 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dptv8" podStartSLOduration=106.019949121 podStartE2EDuration="1m46.019949121s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.002413152 +0000 UTC m=+170.955736472" watchObservedRunningTime="2026-03-11 08:42:00.019949121 +0000 UTC m=+170.973272441" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.033788 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.033761991 podStartE2EDuration="1m20.033761991s" podCreationTimestamp="2026-03-11 08:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.020099315 +0000 UTC m=+170.973422625" watchObservedRunningTime="2026-03-11 08:42:00.033761991 +0000 UTC m=+170.987085321" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.034318 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=14.034312737 podStartE2EDuration="14.034312737s" podCreationTimestamp="2026-03-11 08:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.033547785 +0000 UTC m=+170.986871105" watchObservedRunningTime="2026-03-11 08:42:00.034312737 +0000 UTC m=+170.987636067" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.045120 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=89.04510816 podStartE2EDuration="1m29.04510816s" podCreationTimestamp="2026-03-11 08:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.044498092 +0000 UTC m=+170.997821432" watchObservedRunningTime="2026-03-11 08:42:00.04510816 +0000 UTC m=+170.998431480" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.056336 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.056318605 podStartE2EDuration="50.056318605s" podCreationTimestamp="2026-03-11 08:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.055834671 +0000 UTC m=+171.009157991" watchObservedRunningTime="2026-03-11 08:42:00.056318605 +0000 UTC m=+171.009641915" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.094377 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podStartSLOduration=106.09411694 podStartE2EDuration="1m46.09411694s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.079456705 +0000 UTC m=+171.032780055" watchObservedRunningTime="2026-03-11 08:42:00.09411694 +0000 UTC m=+171.047440280" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.094641 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dgh9v" podStartSLOduration=106.094632445 podStartE2EDuration="1m46.094632445s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:00.092970247 +0000 UTC m=+171.046293577" watchObservedRunningTime="2026-03-11 08:42:00.094632445 +0000 UTC m=+171.047955785" Mar 11 08:42:00 crc kubenswrapper[4808]: I0311 08:42:00.788977 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:00 crc kubenswrapper[4808]: E0311 08:42:00.789182 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:01 crc kubenswrapper[4808]: I0311 08:42:01.789239 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:01 crc kubenswrapper[4808]: I0311 08:42:01.789281 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:01 crc kubenswrapper[4808]: E0311 08:42:01.789459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:01 crc kubenswrapper[4808]: I0311 08:42:01.789489 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:01 crc kubenswrapper[4808]: E0311 08:42:01.789595 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:01 crc kubenswrapper[4808]: E0311 08:42:01.789695 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:02 crc kubenswrapper[4808]: I0311 08:42:02.788642 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:02 crc kubenswrapper[4808]: E0311 08:42:02.788863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:02 crc kubenswrapper[4808]: I0311 08:42:02.853540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:02 crc kubenswrapper[4808]: E0311 08:42:02.853731 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:42:02 crc kubenswrapper[4808]: E0311 08:42:02.853825 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs podName:cf747e37-c201-4dcc-a2a5-2429f4eba47d nodeName:}" failed. No retries permitted until 2026-03-11 08:43:06.853795993 +0000 UTC m=+237.807119353 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs") pod "network-metrics-daemon-kqsq9" (UID: "cf747e37-c201-4dcc-a2a5-2429f4eba47d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.093384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.093424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.093435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.093450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.093461 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:42:03Z","lastTransitionTime":"2026-03-11T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.120301 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.128522 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.151946 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz"] Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.152256 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.155425 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.155656 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.157155 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.157377 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.255667 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4bf2486-842b-4f27-a9d8-4124a6b465ba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 
08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.255751 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.255781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4bf2486-842b-4f27-a9d8-4124a6b465ba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.255847 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4bf2486-842b-4f27-a9d8-4124a6b465ba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.255884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357234 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4bf2486-842b-4f27-a9d8-4124a6b465ba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4bf2486-842b-4f27-a9d8-4124a6b465ba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357410 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc 
kubenswrapper[4808]: I0311 08:42:03.357436 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4bf2486-842b-4f27-a9d8-4124a6b465ba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.357731 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a4bf2486-842b-4f27-a9d8-4124a6b465ba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.359289 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4bf2486-842b-4f27-a9d8-4124a6b465ba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.367900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4bf2486-842b-4f27-a9d8-4124a6b465ba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: \"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.388278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4bf2486-842b-4f27-a9d8-4124a6b465ba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjjpz\" (UID: 
\"a4bf2486-842b-4f27-a9d8-4124a6b465ba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.470478 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.668262 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" event={"ID":"a4bf2486-842b-4f27-a9d8-4124a6b465ba","Type":"ContainerStarted","Data":"9925f5fbff72268c833a0543c96e0c1ec424ff573cb4cdca99deff1143d87810"} Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.668345 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" event={"ID":"a4bf2486-842b-4f27-a9d8-4124a6b465ba","Type":"ContainerStarted","Data":"c38e1101c14f51c11761ddf7728eeea09490633f432d0af34c6f576504762acc"} Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.686815 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjjpz" podStartSLOduration=109.686738901 podStartE2EDuration="1m49.686738901s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:03.683912289 +0000 UTC m=+174.637235639" watchObservedRunningTime="2026-03-11 08:42:03.686738901 +0000 UTC m=+174.640062261" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.788677 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.788682 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:03 crc kubenswrapper[4808]: I0311 08:42:03.788839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:03 crc kubenswrapper[4808]: E0311 08:42:03.789253 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:03 crc kubenswrapper[4808]: E0311 08:42:03.789438 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:03 crc kubenswrapper[4808]: E0311 08:42:03.789268 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:04 crc kubenswrapper[4808]: I0311 08:42:04.788731 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:04 crc kubenswrapper[4808]: E0311 08:42:04.789281 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:04 crc kubenswrapper[4808]: E0311 08:42:04.924005 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:05 crc kubenswrapper[4808]: I0311 08:42:05.789285 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:05 crc kubenswrapper[4808]: I0311 08:42:05.789303 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:05 crc kubenswrapper[4808]: E0311 08:42:05.790062 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:05 crc kubenswrapper[4808]: I0311 08:42:05.789497 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:05 crc kubenswrapper[4808]: E0311 08:42:05.790175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:05 crc kubenswrapper[4808]: E0311 08:42:05.790517 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:06 crc kubenswrapper[4808]: I0311 08:42:06.789220 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:06 crc kubenswrapper[4808]: E0311 08:42:06.789393 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:07 crc kubenswrapper[4808]: I0311 08:42:07.790612 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:07 crc kubenswrapper[4808]: E0311 08:42:07.790759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:07 crc kubenswrapper[4808]: I0311 08:42:07.790609 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:07 crc kubenswrapper[4808]: E0311 08:42:07.790862 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:07 crc kubenswrapper[4808]: I0311 08:42:07.790901 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:07 crc kubenswrapper[4808]: E0311 08:42:07.790961 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:08 crc kubenswrapper[4808]: I0311 08:42:08.788349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:08 crc kubenswrapper[4808]: E0311 08:42:08.788577 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:09 crc kubenswrapper[4808]: I0311 08:42:09.789156 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:09 crc kubenswrapper[4808]: I0311 08:42:09.789248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:09 crc kubenswrapper[4808]: I0311 08:42:09.789292 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:09 crc kubenswrapper[4808]: E0311 08:42:09.791603 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:09 crc kubenswrapper[4808]: E0311 08:42:09.791736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:09 crc kubenswrapper[4808]: E0311 08:42:09.791861 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:09 crc kubenswrapper[4808]: E0311 08:42:09.924714 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:10 crc kubenswrapper[4808]: I0311 08:42:10.789279 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:10 crc kubenswrapper[4808]: E0311 08:42:10.789530 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:11 crc kubenswrapper[4808]: I0311 08:42:11.788914 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:11 crc kubenswrapper[4808]: E0311 08:42:11.789060 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:11 crc kubenswrapper[4808]: I0311 08:42:11.789114 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:11 crc kubenswrapper[4808]: E0311 08:42:11.789242 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:11 crc kubenswrapper[4808]: I0311 08:42:11.789770 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:11 crc kubenswrapper[4808]: E0311 08:42:11.789998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:11 crc kubenswrapper[4808]: I0311 08:42:11.790308 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:42:11 crc kubenswrapper[4808]: E0311 08:42:11.790579 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8wfl5_openshift-ovn-kubernetes(afeac5d0-d84f-4776-ae37-a03c8f0f66b8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" Mar 11 08:42:12 crc kubenswrapper[4808]: I0311 08:42:12.788720 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:12 crc kubenswrapper[4808]: E0311 08:42:12.788868 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:13 crc kubenswrapper[4808]: I0311 08:42:13.789014 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:13 crc kubenswrapper[4808]: I0311 08:42:13.789059 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:13 crc kubenswrapper[4808]: I0311 08:42:13.789018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:13 crc kubenswrapper[4808]: E0311 08:42:13.789215 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:13 crc kubenswrapper[4808]: E0311 08:42:13.789398 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:13 crc kubenswrapper[4808]: E0311 08:42:13.789552 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:14 crc kubenswrapper[4808]: I0311 08:42:14.788675 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:14 crc kubenswrapper[4808]: E0311 08:42:14.788900 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:14 crc kubenswrapper[4808]: E0311 08:42:14.926145 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:15 crc kubenswrapper[4808]: I0311 08:42:15.788529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:15 crc kubenswrapper[4808]: E0311 08:42:15.788778 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:15 crc kubenswrapper[4808]: I0311 08:42:15.788539 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:15 crc kubenswrapper[4808]: E0311 08:42:15.789127 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:15 crc kubenswrapper[4808]: I0311 08:42:15.788529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:15 crc kubenswrapper[4808]: E0311 08:42:15.789306 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:16 crc kubenswrapper[4808]: I0311 08:42:16.788561 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:16 crc kubenswrapper[4808]: E0311 08:42:16.788744 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:17 crc kubenswrapper[4808]: I0311 08:42:17.788975 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:17 crc kubenswrapper[4808]: I0311 08:42:17.789027 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:17 crc kubenswrapper[4808]: I0311 08:42:17.789048 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:17 crc kubenswrapper[4808]: E0311 08:42:17.789163 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:17 crc kubenswrapper[4808]: E0311 08:42:17.789271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:17 crc kubenswrapper[4808]: E0311 08:42:17.789417 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:18 crc kubenswrapper[4808]: I0311 08:42:18.789058 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:18 crc kubenswrapper[4808]: E0311 08:42:18.789264 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.766248 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/1.log" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.766695 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/0.log" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.766727 4808 generic.go:334] "Generic (PLEG): container finished" podID="c1a75dfb-31dd-4275-a309-c9e7130feb05" containerID="e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c" exitCode=1 Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.766755 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerDied","Data":"e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c"} Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.766787 4808 scope.go:117] "RemoveContainer" containerID="81effdeab74d0bc6b6f904ca05d6ae5f9777772a1b6e69e9038bb6818af8d979" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.767150 4808 scope.go:117] "RemoveContainer" containerID="e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c" Mar 11 08:42:19 crc kubenswrapper[4808]: E0311 08:42:19.767448 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dgh9v_openshift-multus(c1a75dfb-31dd-4275-a309-c9e7130feb05)\"" pod="openshift-multus/multus-dgh9v" podUID="c1a75dfb-31dd-4275-a309-c9e7130feb05" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.789078 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:19 crc kubenswrapper[4808]: E0311 08:42:19.789259 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.791086 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:19 crc kubenswrapper[4808]: E0311 08:42:19.791200 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:19 crc kubenswrapper[4808]: I0311 08:42:19.791244 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:19 crc kubenswrapper[4808]: E0311 08:42:19.791288 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:19 crc kubenswrapper[4808]: E0311 08:42:19.926601 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:21 crc kubenswrapper[4808]: I0311 08:42:20.772100 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/1.log" Mar 11 08:42:21 crc kubenswrapper[4808]: I0311 08:42:20.788950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:21 crc kubenswrapper[4808]: E0311 08:42:20.789054 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:21 crc kubenswrapper[4808]: I0311 08:42:21.789071 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:21 crc kubenswrapper[4808]: I0311 08:42:21.789136 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:21 crc kubenswrapper[4808]: I0311 08:42:21.789212 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:21 crc kubenswrapper[4808]: E0311 08:42:21.789447 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:21 crc kubenswrapper[4808]: E0311 08:42:21.789591 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:21 crc kubenswrapper[4808]: E0311 08:42:21.789725 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:22 crc kubenswrapper[4808]: I0311 08:42:22.788994 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:22 crc kubenswrapper[4808]: E0311 08:42:22.789153 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:23 crc kubenswrapper[4808]: I0311 08:42:23.789186 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:23 crc kubenswrapper[4808]: I0311 08:42:23.789247 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:23 crc kubenswrapper[4808]: E0311 08:42:23.789326 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:23 crc kubenswrapper[4808]: I0311 08:42:23.789214 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:23 crc kubenswrapper[4808]: E0311 08:42:23.789420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:23 crc kubenswrapper[4808]: E0311 08:42:23.789537 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:24 crc kubenswrapper[4808]: I0311 08:42:24.788329 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:24 crc kubenswrapper[4808]: E0311 08:42:24.788473 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:24 crc kubenswrapper[4808]: E0311 08:42:24.927996 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:25 crc kubenswrapper[4808]: I0311 08:42:25.788280 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:25 crc kubenswrapper[4808]: I0311 08:42:25.788289 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:25 crc kubenswrapper[4808]: I0311 08:42:25.788450 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:25 crc kubenswrapper[4808]: E0311 08:42:25.789445 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:25 crc kubenswrapper[4808]: E0311 08:42:25.789682 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:25 crc kubenswrapper[4808]: E0311 08:42:25.789801 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:25 crc kubenswrapper[4808]: I0311 08:42:25.790264 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.734983 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kqsq9"] Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.735099 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:26 crc kubenswrapper[4808]: E0311 08:42:26.735226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.795682 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/3.log" Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.798802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerStarted","Data":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.799413 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:42:26 crc kubenswrapper[4808]: I0311 08:42:26.831393 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podStartSLOduration=132.831376021 podStartE2EDuration="2m12.831376021s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:26.830187007 +0000 UTC m=+197.783510347" watchObservedRunningTime="2026-03-11 08:42:26.831376021 +0000 UTC m=+197.784699341" Mar 11 08:42:27 crc kubenswrapper[4808]: I0311 08:42:27.788668 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:27 crc kubenswrapper[4808]: I0311 08:42:27.788688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:27 crc kubenswrapper[4808]: I0311 08:42:27.788708 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:27 crc kubenswrapper[4808]: E0311 08:42:27.788960 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:27 crc kubenswrapper[4808]: E0311 08:42:27.789111 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:27 crc kubenswrapper[4808]: E0311 08:42:27.789275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:28 crc kubenswrapper[4808]: I0311 08:42:28.789128 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:28 crc kubenswrapper[4808]: E0311 08:42:28.789393 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:29 crc kubenswrapper[4808]: I0311 08:42:29.788597 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:29 crc kubenswrapper[4808]: I0311 08:42:29.788666 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:29 crc kubenswrapper[4808]: I0311 08:42:29.788719 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:29 crc kubenswrapper[4808]: E0311 08:42:29.790895 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:29 crc kubenswrapper[4808]: E0311 08:42:29.790975 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:29 crc kubenswrapper[4808]: E0311 08:42:29.791166 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:29 crc kubenswrapper[4808]: E0311 08:42:29.929172 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:42:30 crc kubenswrapper[4808]: I0311 08:42:30.789323 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:30 crc kubenswrapper[4808]: I0311 08:42:30.789527 4808 scope.go:117] "RemoveContainer" containerID="e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c" Mar 11 08:42:30 crc kubenswrapper[4808]: E0311 08:42:30.789581 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:31 crc kubenswrapper[4808]: I0311 08:42:31.789061 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:31 crc kubenswrapper[4808]: I0311 08:42:31.789191 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:31 crc kubenswrapper[4808]: I0311 08:42:31.789298 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:31 crc kubenswrapper[4808]: E0311 08:42:31.789288 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:31 crc kubenswrapper[4808]: E0311 08:42:31.789525 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:31 crc kubenswrapper[4808]: E0311 08:42:31.789689 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:31 crc kubenswrapper[4808]: I0311 08:42:31.822625 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/1.log" Mar 11 08:42:31 crc kubenswrapper[4808]: I0311 08:42:31.822726 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerStarted","Data":"a74cf4fc7a6efc5e697b4c5b638237ec2b87c79fbcc672ad3c5e57df7e0e9cd7"} Mar 11 08:42:32 crc kubenswrapper[4808]: I0311 08:42:32.789256 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:32 crc kubenswrapper[4808]: E0311 08:42:32.789469 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:33 crc kubenswrapper[4808]: I0311 08:42:33.788633 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:33 crc kubenswrapper[4808]: I0311 08:42:33.788710 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:33 crc kubenswrapper[4808]: I0311 08:42:33.788646 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:33 crc kubenswrapper[4808]: E0311 08:42:33.788867 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:42:33 crc kubenswrapper[4808]: E0311 08:42:33.789104 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:42:33 crc kubenswrapper[4808]: E0311 08:42:33.789012 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.085105 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.085529 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.085615 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:44:36.085593635 +0000 UTC m=+327.038916985 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.085353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.085917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086030 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086087 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:44:36.086069778 +0000 UTC m=+327.039393138 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.086289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086486 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086519 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086540 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086551 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:44:36.086525072 +0000 UTC m=+327.039848422 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.086650 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:44:36.086628725 +0000 UTC m=+327.039952075 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.086894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.087032 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.087052 4808 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.087067 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.087123 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:44:36.087108808 +0000 UTC m=+327.040432168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:42:34 crc kubenswrapper[4808]: I0311 08:42:34.789016 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:34 crc kubenswrapper[4808]: E0311 08:42:34.789236 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kqsq9" podUID="cf747e37-c201-4dcc-a2a5-2429f4eba47d" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.789065 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.789082 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.789637 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.791498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.791642 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.791895 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 11 08:42:35 crc kubenswrapper[4808]: I0311 08:42:35.792664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 11 08:42:36 crc kubenswrapper[4808]: I0311 08:42:36.788420 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:42:36 crc kubenswrapper[4808]: I0311 08:42:36.791668 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 11 08:42:36 crc kubenswrapper[4808]: I0311 08:42:36.792124 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.928418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.974573 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zld8r"] Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.975053 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.978989 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.979226 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.979413 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.979686 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.979747 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.979974 4808 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.998404 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n"] Mar 11 08:42:43 crc kubenswrapper[4808]: I0311 08:42:43.999011 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.001671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vvz25"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.002300 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.003875 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.004250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.004506 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.004982 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.005318 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.010579 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.014101 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.015523 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.017255 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bp68q"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.027877 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.030818 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fwsjb"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.031385 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.031965 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.032031 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.033421 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.034008 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.035080 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.035840 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.048568 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.048824 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.049001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.049108 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.049222 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.049400 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.049570 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.050025 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 
08:42:44.050321 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.050469 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.052112 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.052176 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.052379 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.052736 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.052954 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053023 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053093 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053164 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053236 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053309 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053392 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053473 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053538 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053605 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053712 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053723 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.053851 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.054162 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 11 08:42:44 crc 
kubenswrapper[4808]: I0311 08:42:44.054348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.056047 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.057230 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.057891 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.058573 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.058963 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.059157 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.059281 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vbbdr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.059737 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.059983 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.060597 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.061141 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-w6wrf"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.062751 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.063146 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.063472 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.063636 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.067029 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.067798 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068079 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068258 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068562 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068596 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068711 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068796 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068822 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.068895 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089412 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089458 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089583 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089686 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089699 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 
11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.089898 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090089 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090153 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090210 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090250 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090302 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090344 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090401 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090454 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.090593 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.091045 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.091087 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.091161 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.094271 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjpgk"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.094750 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gv48h"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.095047 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.095620 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.096501 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.099299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.100451 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.101741 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.102339 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.102886 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.103293 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.103735 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.104018 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.104051 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.104194 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127305 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h9wnt"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127750 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-serving-cert\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-config\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127891 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127925 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-node-pullsecrets\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127958 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrgp\" (UniqueName: \"kubernetes.io/projected/a8210275-38af-4411-b048-dc409cc7b88d-kube-api-access-pfrgp\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.127995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163e19a-baca-49ff-a20e-457983474014-config\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128024 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-serving-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8210275-38af-4411-b048-dc409cc7b88d-serving-cert\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b163e19a-baca-49ff-a20e-457983474014-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-audit\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128423 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffcs\" (UniqueName: \"kubernetes.io/projected/9381da56-e194-4d1c-a8ac-577035653c33-kube-api-access-pffcs\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128457 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128488 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrtw\" (UniqueName: \"kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgnv\" (UniqueName: \"kubernetes.io/projected/adc95b24-ac31-47c8-8221-b27da4ea0564-kube-api-access-wlgnv\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128551 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9381da56-e194-4d1c-a8ac-577035653c33-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128583 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-config\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pvp\" (UniqueName: \"kubernetes.io/projected/da668c40-37d5-4ebf-ae7e-59e9c301b386-kube-api-access-q8pvp\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-client\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128798 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128833 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-image-import-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.128967 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-encryption-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 
08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129117 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129152 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-dir\") 
pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129278 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvww\" (UniqueName: \"kubernetes.io/projected/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-kube-api-access-sqvww\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129406 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-serving-cert\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129432 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129537 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-encryption-config\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129571 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6862\" (UniqueName: \"kubernetes.io/projected/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-kube-api-access-b6862\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129579 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129595 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-policies\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-serving-cert\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129661 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-images\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/da668c40-37d5-4ebf-ae7e-59e9c301b386-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129710 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-service-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129760 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-audit-dir\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129792 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvdm\" (UniqueName: \"kubernetes.io/projected/b163e19a-baca-49ff-a20e-457983474014-kube-api-access-4zvdm\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.129949 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.130264 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5clr\" (UniqueName: \"kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.130456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-client\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.133458 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.133761 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.134641 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.134898 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.135192 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.135477 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.135948 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.136143 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.136384 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.136549 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.140499 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc"] Mar 11 08:42:44 crc 
kubenswrapper[4808]: I0311 08:42:44.141542 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.152151 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.152440 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.166480 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.173398 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.173732 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.179996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.180416 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.180820 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.181440 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97qlz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.187001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.187112 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.187479 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.187519 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.188525 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.188926 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.189229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.190626 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.191246 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.191611 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zld8r"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.191709 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.191770 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.193771 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.193981 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.194283 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.194889 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.194893 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.195009 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.195421 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.195753 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.200874 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.202653 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.204191 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.204799 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.205113 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t54r9"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.205566 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.205724 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.206198 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.210662 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.211576 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.216603 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.217232 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.217793 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.218137 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553642-n69sp"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.218914 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.220335 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.222093 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mblp"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.222622 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.222866 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.225908 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mp49l"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.227067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236519 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-dir\") pod 
\"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236575 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236602 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvww\" (UniqueName: \"kubernetes.io/projected/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-kube-api-access-sqvww\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236671 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-config\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236721 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-serving-cert\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236798 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-encryption-config\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6862\" (UniqueName: \"kubernetes.io/projected/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-kube-api-access-b6862\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-policies\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236895 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-serving-cert\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236923 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-images\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.236982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/da668c40-37d5-4ebf-ae7e-59e9c301b386-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-service-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237049 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-audit-dir\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237077 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvdm\" (UniqueName: \"kubernetes.io/projected/b163e19a-baca-49ff-a20e-457983474014-kube-api-access-4zvdm\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5clr\" (UniqueName: \"kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-client\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-serving-cert\") pod 
\"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237344 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69kd\" (UniqueName: \"kubernetes.io/projected/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-kube-api-access-j69kd\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-config\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237496 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237526 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-node-pullsecrets\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237565 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-trusted-ca\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237603 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrgp\" (UniqueName: \"kubernetes.io/projected/a8210275-38af-4411-b048-dc409cc7b88d-kube-api-access-pfrgp\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237638 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163e19a-baca-49ff-a20e-457983474014-config\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-serving-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237698 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8210275-38af-4411-b048-dc409cc7b88d-serving-cert\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b163e19a-baca-49ff-a20e-457983474014-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-audit\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237793 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffcs\" (UniqueName: \"kubernetes.io/projected/9381da56-e194-4d1c-a8ac-577035653c33-kube-api-access-pffcs\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrtw\" (UniqueName: 
\"kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.237928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgnv\" (UniqueName: \"kubernetes.io/projected/adc95b24-ac31-47c8-8221-b27da4ea0564-kube-api-access-wlgnv\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qd2f\" (UniqueName: \"kubernetes.io/projected/a23f2035-3d2d-4caa-a1d4-62dfd729f876-kube-api-access-5qd2f\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238180 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9381da56-e194-4d1c-a8ac-577035653c33-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-proxy-tls\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238343 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-config\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238391 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f2035-3d2d-4caa-a1d4-62dfd729f876-serving-cert\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8pvp\" (UniqueName: \"kubernetes.io/projected/da668c40-37d5-4ebf-ae7e-59e9c301b386-kube-api-access-q8pvp\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-client\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238541 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-image-import-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238572 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238613 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-encryption-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238646 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.238675 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.240573 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.242650 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-node-pullsecrets\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.242758 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 
08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.243933 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.244000 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163e19a-baca-49ff-a20e-457983474014-config\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.244020 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.244827 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-serving-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.245281 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.245336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-dir\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.245600 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.245788 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.247182 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.250782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8210275-38af-4411-b048-dc409cc7b88d-serving-cert\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.251255 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9381da56-e194-4d1c-a8ac-577035653c33-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.251909 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-config\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.253692 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b163e19a-baca-49ff-a20e-457983474014-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.253704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-etcd-client\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.254212 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.254581 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-audit\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.255038 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-image-import-ca\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.255237 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.255772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc95b24-ac31-47c8-8221-b27da4ea0564-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.255855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.256656 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vvz25"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.256767 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-service-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.257033 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-audit-policies\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.257496 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.259171 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8210275-38af-4411-b048-dc409cc7b88d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.260135 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.261150 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-encryption-config\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.261318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/adc95b24-ac31-47c8-8221-b27da4ea0564-audit-dir\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.261485 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-serving-cert\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.261546 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.261560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-images\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.262160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da668c40-37d5-4ebf-ae7e-59e9c301b386-config\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 
08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.262342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-encryption-config\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.263767 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.263810 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bp68q"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.265748 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.266093 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-serving-cert\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.267705 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-etcd-client\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.267993 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h2qnq"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.268823 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adc95b24-ac31-47c8-8221-b27da4ea0564-serving-cert\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.269900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.271896 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fwsjb"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.272530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.273873 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vbbdr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.274801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.276512 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.276901 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.277637 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.279024 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.280377 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w6wrf"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.281782 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.283127 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.285310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/da668c40-37d5-4ebf-ae7e-59e9c301b386-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.285916 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.287335 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.288794 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjpgk"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.290163 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97qlz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.291590 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.292957 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.294321 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553642-n69sp"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.295703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.296959 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mp49l"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.296998 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.298523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gv48h"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.299528 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.300556 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.301644 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 
08:42:44.302676 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.303723 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.304758 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t54r9"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.305797 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.306848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.307883 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.309974 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.311039 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9mn64"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.311803 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.312156 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.313286 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f7nlr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.314349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.314452 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9mn64"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.315531 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.316624 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mblp"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.317686 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.318196 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h2qnq"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.319288 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f7nlr"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.320325 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nnfrn"] Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.320970 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.337988 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-trusted-ca\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339464 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qd2f\" (UniqueName: \"kubernetes.io/projected/a23f2035-3d2d-4caa-a1d4-62dfd729f876-kube-api-access-5qd2f\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-proxy-tls\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f2035-3d2d-4caa-a1d4-62dfd729f876-serving-cert\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 
08:42:44.339536 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339566 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-config\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.339631 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69kd\" (UniqueName: \"kubernetes.io/projected/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-kube-api-access-j69kd\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.340338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-config\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.340587 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: 
\"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.340633 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a23f2035-3d2d-4caa-a1d4-62dfd729f876-trusted-ca\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.342628 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23f2035-3d2d-4caa-a1d4-62dfd729f876-serving-cert\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.358052 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.378800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.397882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.417820 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.436981 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.457399 4808 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.477098 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.497392 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.502647 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-proxy-tls\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.517525 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.579054 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.598054 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.628601 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.638405 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 11 
08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.656881 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.677187 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.697594 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.724156 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.737251 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.757712 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.784283 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.798527 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.817928 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.856260 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.857317 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.878324 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.898197 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.918393 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.937659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.958149 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:42:44 crc kubenswrapper[4808]: I0311 08:42:44.978737 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.003085 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.018301 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.038310 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.057767 4808 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.079030 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.099250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.117762 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.138774 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.158104 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.177895 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.195580 4808 request.go:700] Waited for 1.000014931s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.197280 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.218122 4808 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.238011 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.258558 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.278794 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.299223 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.318416 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.347926 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.357977 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.377632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.398195 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.418135 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.438078 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.457684 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.477818 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.496972 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.517800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.537799 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.557101 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.577420 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.597180 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.617778 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.637027 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.656640 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.677800 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.698814 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.718024 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.736892 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.757936 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.778558 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.797605 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.836325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrgp\" (UniqueName: \"kubernetes.io/projected/a8210275-38af-4411-b048-dc409cc7b88d-kube-api-access-pfrgp\") pod 
\"authentication-operator-69f744f599-fwsjb\" (UID: \"a8210275-38af-4411-b048-dc409cc7b88d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.853518 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8pvp\" (UniqueName: \"kubernetes.io/projected/da668c40-37d5-4ebf-ae7e-59e9c301b386-kube-api-access-q8pvp\") pod \"machine-api-operator-5694c8668f-zld8r\" (UID: \"da668c40-37d5-4ebf-ae7e-59e9c301b386\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.878533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvww\" (UniqueName: \"kubernetes.io/projected/30ee7d2f-3ff9-44c0-8128-e5a25ed43613-kube-api-access-sqvww\") pod \"apiserver-7bbb656c7d-8wmnr\" (UID: \"30ee7d2f-3ff9-44c0-8128-e5a25ed43613\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.894976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffcs\" (UniqueName: \"kubernetes.io/projected/9381da56-e194-4d1c-a8ac-577035653c33-kube-api-access-pffcs\") pod \"cluster-samples-operator-665b6dd947-p5m5n\" (UID: \"9381da56-e194-4d1c-a8ac-577035653c33\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.897980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.898571 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.917926 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.952223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrtw\" (UniqueName: \"kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw\") pod \"route-controller-manager-6576b87f9c-n2lkc\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.971871 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgnv\" (UniqueName: \"kubernetes.io/projected/adc95b24-ac31-47c8-8221-b27da4ea0564-kube-api-access-wlgnv\") pod \"apiserver-76f77b778f-bp68q\" (UID: \"adc95b24-ac31-47c8-8221-b27da4ea0564\") " pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.992271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6862\" (UniqueName: \"kubernetes.io/projected/0eec4c1a-2ffd-487b-bd91-ecb5008f789e-kube-api-access-b6862\") pod \"openshift-config-operator-7777fb866f-vvz25\" (UID: \"0eec4c1a-2ffd-487b-bd91-ecb5008f789e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:45 crc kubenswrapper[4808]: I0311 08:42:45.994162 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.012814 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvdm\" (UniqueName: \"kubernetes.io/projected/b163e19a-baca-49ff-a20e-457983474014-kube-api-access-4zvdm\") pod \"openshift-apiserver-operator-796bbdcf4f-td5rr\" (UID: \"b163e19a-baca-49ff-a20e-457983474014\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.027659 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.027728 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.039734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.040147 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5clr\" (UniqueName: \"kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr\") pod \"controller-manager-879f6c89f-6hxxc\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.058230 4808 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.078427 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.092290 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.097889 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.115532 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.117234 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.137834 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.152054 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.160036 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.166175 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fwsjb"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.173544 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.177212 4808 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.182751 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.189870 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8210275_38af_4411_b048_dc409cc7b88d.slice/crio-afaa3d95f13ac7cb1395efaa21abb78e5cb44ce39bbd3aea574fcf91ca5f431b WatchSource:0}: Error finding container afaa3d95f13ac7cb1395efaa21abb78e5cb44ce39bbd3aea574fcf91ca5f431b: Status 404 returned error can't find the container with id afaa3d95f13ac7cb1395efaa21abb78e5cb44ce39bbd3aea574fcf91ca5f431b Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.196008 4808 request.go:700] Waited for 1.881453322s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.198712 4808 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.207598 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.217402 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.236806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.237539 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.257561 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.270327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zld8r"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.277534 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.279589 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.303474 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n"] Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.304912 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda668c40_37d5_4ebf_ae7e_59e9c301b386.slice/crio-485bd4535e6da3dc9bf18ece89e1b75ad6c1fd56cef0f3ed4300d40aee152325 WatchSource:0}: Error finding container 485bd4535e6da3dc9bf18ece89e1b75ad6c1fd56cef0f3ed4300d40aee152325: Status 404 returned error can't find the container with id 485bd4535e6da3dc9bf18ece89e1b75ad6c1fd56cef0f3ed4300d40aee152325 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.312240 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qd2f\" (UniqueName: \"kubernetes.io/projected/a23f2035-3d2d-4caa-a1d4-62dfd729f876-kube-api-access-5qd2f\") pod \"console-operator-58897d9998-gv48h\" (UID: \"a23f2035-3d2d-4caa-a1d4-62dfd729f876\") " pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.335779 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69kd\" (UniqueName: \"kubernetes.io/projected/e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb-kube-api-access-j69kd\") pod \"machine-config-controller-84d6567774-ntqfc\" (UID: \"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.401667 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vvz25"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410819 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-client\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410843 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410859 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-config\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410873 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410891 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410905 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-service-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410921 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.410936 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl98p\" (UniqueName: \"kubernetes.io/projected/a9e02138-74fa-484f-a87d-b88e05e92d58-kube-api-access-jl98p\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.411109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9e02138-74fa-484f-a87d-b88e05e92d58-machine-approver-tls\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.411288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.411434 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412100 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-serving-cert\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412252 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412284 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d7251-5000-4973-aeaa-0d085d4f264d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412299 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d7251-5000-4973-aeaa-0d085d4f264d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-auth-proxy-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412607 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpbf\" (UniqueName: \"kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf\") pod \"console-f9d7485db-zp4ks\" (UID: 
\"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412622 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-default-certificate\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412654 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412670 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ea394d-b845-43b1-93e2-31db42fe21bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412685 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-stats-auth\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: \"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412782 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5fj\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea394d-b845-43b1-93e2-31db42fe21bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412893 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlx2\" (UniqueName: \"kubernetes.io/projected/d4ea394d-b845-43b1-93e2-31db42fe21bd-kube-api-access-nqlx2\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412930 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.412999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413032 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 
08:42:46.413046 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6954h\" (UniqueName: \"kubernetes.io/projected/6c9411d8-3c3c-4ae8-9580-10ae4884967b-kube-api-access-6954h\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcnx\" (UniqueName: \"kubernetes.io/projected/6e36469c-36ee-4c28-b4f3-c37c5b25330d-kube-api-access-dzcnx\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d7251-5000-4973-aeaa-0d085d4f264d-config\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413543 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9411d8-3c3c-4ae8-9580-10ae4884967b-service-ca-bundle\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413576 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4w5\" (UniqueName: \"kubernetes.io/projected/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-kube-api-access-tf4w5\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: \"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.413600 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-metrics-certs\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.413769 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:46.91375578 +0000 UTC m=+217.867079170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.421799 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bp68q"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.436707 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.444349 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc95b24_ac31_47c8_8221_b27da4ea0564.slice/crio-9ca0ded6cf60645349aaea5474ab53cb48253c03f19c5488ae65f842358e5d17 WatchSource:0}: Error finding container 9ca0ded6cf60645349aaea5474ab53cb48253c03f19c5488ae65f842358e5d17: Status 404 returned error can't find the container with id 9ca0ded6cf60645349aaea5474ab53cb48253c03f19c5488ae65f842358e5d17 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.466155 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.481257 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.491439 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.513963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: 
\"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514057 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-proxy-tls\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514084 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ea394d-b845-43b1-93e2-31db42fe21bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514117 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43783d3c-c9b4-456b-a821-09ddf3e9ca75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514139 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxmn\" (UniqueName: \"kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514153 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-srv-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-webhook-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhrf\" (UniqueName: \"kubernetes.io/projected/3e544519-8cbf-476f-8025-c4c867dead85-kube-api-access-fdhrf\") pod 
\"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514250 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10a3cae7-8c99-405a-a5ec-cbed98e037bd-config-volume\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514263 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514282 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlx2\" (UniqueName: \"kubernetes.io/projected/d4ea394d-b845-43b1-93e2-31db42fe21bd-kube-api-access-nqlx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514314 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514344 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5vc\" (UniqueName: \"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-kube-api-access-rv5vc\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df778b51-e71b-4518-8cd8-e01c12cfd03f-config\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcnx\" (UniqueName: \"kubernetes.io/projected/6e36469c-36ee-4c28-b4f3-c37c5b25330d-kube-api-access-dzcnx\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6954h\" (UniqueName: \"kubernetes.io/projected/6c9411d8-3c3c-4ae8-9580-10ae4884967b-kube-api-access-6954h\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " 
pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514466 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-cabundle\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4w5\" (UniqueName: \"kubernetes.io/projected/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-kube-api-access-tf4w5\") pod \"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: \"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514517 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9411d8-3c3c-4ae8-9580-10ae4884967b-service-ca-bundle\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514539 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-metrics-certs\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514670 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-client\") pod 
\"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43783d3c-c9b4-456b-a821-09ddf3e9ca75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514740 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvv2\" (UniqueName: \"kubernetes.io/projected/cc204192-dc8a-4a27-ba31-7b45ed831217-kube-api-access-pkvv2\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc 
kubenswrapper[4808]: I0311 08:42:46.514771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a27437d-c17a-4c8e-837c-f86587dc9346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bps2h\" (UniqueName: \"kubernetes.io/projected/9194a5de-636b-410e-b05b-64f861a6daf9-kube-api-access-bps2h\") pod \"migrator-59844c95c7-2z52n\" (UID: \"9194a5de-636b-410e-b05b-64f861a6daf9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514840 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514855 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0dca839a-7824-4d7b-9c48-ebc16a0d156f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/a9e02138-74fa-484f-a87d-b88e05e92d58-machine-approver-tls\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s62s\" (UniqueName: \"kubernetes.io/projected/bef53da3-4222-4e59-9ef7-373d15c9721b-kube-api-access-7s62s\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514905 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514920 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bcd05b-916e-481a-989f-eb860903e1a1-metrics-tls\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.514938 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.515016 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.014999921 +0000 UTC m=+217.968323241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515064 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4s6\" (UniqueName: \"kubernetes.io/projected/3118d277-cc77-47d6-b6ae-7799a5220bef-kube-api-access-lz4s6\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-serving-cert\") pod \"etcd-operator-b45778765-bjpgk\" (UID: 
\"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515109 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515127 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/049eab8c-0b60-4da6-8ad3-64b6ba257301-metrics-tls\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dca839a-7824-4d7b-9c48-ebc16a0d156f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-node-bootstrap-token\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515171 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrr5\" 
(UniqueName: \"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-kube-api-access-zvrr5\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ea394d-b845-43b1-93e2-31db42fe21bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.515983 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516008 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 
08:42:46.516242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9411d8-3c3c-4ae8-9580-10ae4884967b-service-ca-bundle\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516274 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516390 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-registration-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdjq\" (UniqueName: \"kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq\") pod \"collect-profiles-29553630-gsnv8\" (UID: 
\"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516491 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-default-certificate\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516582 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516596 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-plugins-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516627 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-srv-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516643 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77bcd05b-916e-481a-989f-eb860903e1a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-stats-auth\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516728 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-mountpoint-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.516744 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-apiservice-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.517554 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.519613 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-serving-cert\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.520293 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.520900 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-metrics-certs\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.521874 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: \"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522518 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-client\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522542 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9e02138-74fa-484f-a87d-b88e05e92d58-machine-approver-tls\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522597 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: 
\"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522651 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c5fj\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2g8\" (UniqueName: \"kubernetes.io/projected/6985cebb-8e40-453c-ae6f-935654eed745-kube-api-access-5b2g8\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522741 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-key\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea394d-b845-43b1-93e2-31db42fe21bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522801 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df778b51-e71b-4518-8cd8-e01c12cfd03f-serving-cert\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522830 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8rw\" (UniqueName: \"kubernetes.io/projected/8a27437d-c17a-4c8e-837c-f86587dc9346-kube-api-access-7f8rw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-csi-data-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cj2\" (UniqueName: \"kubernetes.io/projected/759ae8a0-0d30-4da6-82e5-7d82ebfec823-kube-api-access-45cj2\") pod \"csi-hostpathplugin-f7nlr\" 
(UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522915 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522942 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.522982 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-profile-collector-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523002 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-images\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523045 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfrp\" (UniqueName: \"kubernetes.io/projected/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-kube-api-access-bkfrp\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523069 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-socket-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjk6\" (UniqueName: \"kubernetes.io/projected/281b54ef-f700-437e-b975-1979a9d31151-kube-api-access-hzjk6\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523116 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-76p85\" (UID: 
\"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2dd2c5e-2805-4166-8819-65486393641b-cert\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e544519-8cbf-476f-8025-c4c867dead85-tmpfs\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d7251-5000-4973-aeaa-0d085d4f264d-config\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523283 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43783d3c-c9b4-456b-a821-09ddf3e9ca75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523307 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a27437d-c17a-4c8e-837c-f86587dc9346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3118d277-cc77-47d6-b6ae-7799a5220bef-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523378 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca\") pod 
\"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-config\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523446 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwg6x\" (UniqueName: \"kubernetes.io/projected/df778b51-e71b-4518-8cd8-e01c12cfd03f-kube-api-access-gwg6x\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523494 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-service-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523518 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523542 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl98p\" (UniqueName: \"kubernetes.io/projected/a9e02138-74fa-484f-a87d-b88e05e92d58-kube-api-access-jl98p\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlf9v\" (UniqueName: \"kubernetes.io/projected/025e6699-14e0-4c1c-8c27-77f280040b4e-kube-api-access-wlf9v\") pod \"downloads-7954f5f757-w6wrf\" (UID: \"025e6699-14e0-4c1c-8c27-77f280040b4e\") " pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523628 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.523677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2xz5\" (UniqueName: \"kubernetes.io/projected/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-kube-api-access-j2xz5\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.524155 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-config\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.524676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-stats-auth\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.524935 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c9411d8-3c3c-4ae8-9580-10ae4884967b-default-certificate\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.525623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527305 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-config\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527343 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfph\" (UniqueName: \"kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph\") pod \"auto-csr-approver-29553642-n69sp\" (UID: \"29ace785-0297-4201-9d1c-778af0740058\") " pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527402 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d7251-5000-4973-aeaa-0d085d4f264d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d7251-5000-4973-aeaa-0d085d4f264d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527445 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59p2\" (UniqueName: \"kubernetes.io/projected/10a3cae7-8c99-405a-a5ec-cbed98e037bd-kube-api-access-h59p2\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527475 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-auth-proxy-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527492 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: 
\"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527575 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527593 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-certs\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527653 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10a3cae7-8c99-405a-a5ec-cbed98e037bd-metrics-tls\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpbf\" (UniqueName: \"kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfqd\" (UniqueName: \"kubernetes.io/projected/049eab8c-0b60-4da6-8ad3-64b6ba257301-kube-api-access-bwfqd\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjtq\" (UniqueName: \"kubernetes.io/projected/c2dd2c5e-2805-4166-8819-65486393641b-kube-api-access-8zjtq\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.527726 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc 
kubenswrapper[4808]: I0311 08:42:46.527782 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9x95\" (UniqueName: \"kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.528054 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea394d-b845-43b1-93e2-31db42fe21bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.528257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.528399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d7251-5000-4973-aeaa-0d085d4f264d-config\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.528795 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle\") pod \"console-f9d7485db-zp4ks\" (UID: 
\"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.529533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e36469c-36ee-4c28-b4f3-c37c5b25330d-etcd-service-ca\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.530463 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5f17e9_1e47_4c7a_b225_a874c78f88ef.slice/crio-bb8d6e4082072ca2a7ecb448fab602fd256b4243071bb80fb724c2de43274532 WatchSource:0}: Error finding container bb8d6e4082072ca2a7ecb448fab602fd256b4243071bb80fb724c2de43274532: Status 404 returned error can't find the container with id bb8d6e4082072ca2a7ecb448fab602fd256b4243071bb80fb724c2de43274532 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.531464 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.532130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.535491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-auth-proxy-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.535528 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.535777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9e02138-74fa-484f-a87d-b88e05e92d58-config\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.536026 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.536046 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d7251-5000-4973-aeaa-0d085d4f264d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.536954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls\") pod \"image-registry-697d97f7c8-76p85\" 
(UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.555984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4w5\" (UniqueName: \"kubernetes.io/projected/cde5dc2c-6004-42ac-bd4a-93f0bec898fa-kube-api-access-tf4w5\") pod \"control-plane-machine-set-operator-78cbb6b69f-s9lbs\" (UID: \"cde5dc2c-6004-42ac-bd4a-93f0bec898fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.570745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcnx\" (UniqueName: \"kubernetes.io/projected/6e36469c-36ee-4c28-b4f3-c37c5b25330d-kube-api-access-dzcnx\") pod \"etcd-operator-b45778765-bjpgk\" (UID: \"6e36469c-36ee-4c28-b4f3-c37c5b25330d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.596455 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6954h\" (UniqueName: \"kubernetes.io/projected/6c9411d8-3c3c-4ae8-9580-10ae4884967b-kube-api-access-6954h\") pod \"router-default-5444994796-h9wnt\" (UID: \"6c9411d8-3c3c-4ae8-9580-10ae4884967b\") " pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.620311 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.626330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlx2\" (UniqueName: \"kubernetes.io/projected/d4ea394d-b845-43b1-93e2-31db42fe21bd-kube-api-access-nqlx2\") pod \"openshift-controller-manager-operator-756b6f6bc6-4llxn\" (UID: \"d4ea394d-b845-43b1-93e2-31db42fe21bd\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628788 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxmn\" (UniqueName: \"kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-srv-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-webhook-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628880 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhrf\" (UniqueName: 
\"kubernetes.io/projected/3e544519-8cbf-476f-8025-c4c867dead85-kube-api-access-fdhrf\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628900 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10a3cae7-8c99-405a-a5ec-cbed98e037bd-config-volume\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628941 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628970 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.628991 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5vc\" (UniqueName: \"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-kube-api-access-rv5vc\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629022 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-cabundle\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629041 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df778b51-e71b-4518-8cd8-e01c12cfd03f-config\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629081 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkvv2\" (UniqueName: \"kubernetes.io/projected/cc204192-dc8a-4a27-ba31-7b45ed831217-kube-api-access-pkvv2\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629102 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43783d3c-c9b4-456b-a821-09ddf3e9ca75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629142 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a27437d-c17a-4c8e-837c-f86587dc9346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bps2h\" (UniqueName: \"kubernetes.io/projected/9194a5de-636b-410e-b05b-64f861a6daf9-kube-api-access-bps2h\") pod \"migrator-59844c95c7-2z52n\" (UID: \"9194a5de-636b-410e-b05b-64f861a6daf9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629201 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7s62s\" (UniqueName: \"kubernetes.io/projected/bef53da3-4222-4e59-9ef7-373d15c9721b-kube-api-access-7s62s\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0dca839a-7824-4d7b-9c48-ebc16a0d156f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bcd05b-916e-481a-989f-eb860903e1a1-metrics-tls\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz4s6\" (UniqueName: \"kubernetes.io/projected/3118d277-cc77-47d6-b6ae-7799a5220bef-kube-api-access-lz4s6\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629440 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/049eab8c-0b60-4da6-8ad3-64b6ba257301-metrics-tls\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629461 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dca839a-7824-4d7b-9c48-ebc16a0d156f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-node-bootstrap-token\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.629537 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrr5\" (UniqueName: 
\"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-kube-api-access-zvrr5\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630117 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630319 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-registration-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630348 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvdjq\" (UniqueName: \"kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630405 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-plugins-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77bcd05b-916e-481a-989f-eb860903e1a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630579 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-srv-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-mountpoint-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630626 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-apiservice-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630651 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630681 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-key\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630708 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2g8\" (UniqueName: \"kubernetes.io/projected/6985cebb-8e40-453c-ae6f-935654eed745-kube-api-access-5b2g8\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630728 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df778b51-e71b-4518-8cd8-e01c12cfd03f-serving-cert\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8rw\" (UniqueName: \"kubernetes.io/projected/8a27437d-c17a-4c8e-837c-f86587dc9346-kube-api-access-7f8rw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cj2\" (UniqueName: \"kubernetes.io/projected/759ae8a0-0d30-4da6-82e5-7d82ebfec823-kube-api-access-45cj2\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " 
pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-csi-data-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.630810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.631625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-cabundle\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.631829 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-mountpoint-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633135 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43783d3c-c9b4-456b-a821-09ddf3e9ca75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-plugins-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633236 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633279 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77bcd05b-916e-481a-989f-eb860903e1a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633381 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df778b51-e71b-4518-8cd8-e01c12cfd03f-config\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a27437d-c17a-4c8e-837c-f86587dc9346-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" 
(UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.633667 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.130821043 +0000 UTC m=+218.084144363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633695 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633733 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-profile-collector-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-images\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfrp\" (UniqueName: \"kubernetes.io/projected/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-kube-api-access-bkfrp\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-socket-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633844 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjk6\" (UniqueName: \"kubernetes.io/projected/281b54ef-f700-437e-b975-1979a9d31151-kube-api-access-hzjk6\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 
08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2dd2c5e-2805-4166-8819-65486393641b-cert\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633938 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e544519-8cbf-476f-8025-c4c867dead85-tmpfs\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43783d3c-c9b4-456b-a821-09ddf3e9ca75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.633987 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a27437d-c17a-4c8e-837c-f86587dc9346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: 
\"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634021 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3118d277-cc77-47d6-b6ae-7799a5220bef-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwg6x\" (UniqueName: \"kubernetes.io/projected/df778b51-e71b-4518-8cd8-e01c12cfd03f-kube-api-access-gwg6x\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634064 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634095 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf9v\" (UniqueName: \"kubernetes.io/projected/025e6699-14e0-4c1c-8c27-77f280040b4e-kube-api-access-wlf9v\") pod \"downloads-7954f5f757-w6wrf\" (UID: \"025e6699-14e0-4c1c-8c27-77f280040b4e\") " pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634125 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-config\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfph\" (UniqueName: \"kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph\") pod \"auto-csr-approver-29553642-n69sp\" (UID: \"29ace785-0297-4201-9d1c-778af0740058\") " 
pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2xz5\" (UniqueName: \"kubernetes.io/projected/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-kube-api-access-j2xz5\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59p2\" (UniqueName: \"kubernetes.io/projected/10a3cae7-8c99-405a-a5ec-cbed98e037bd-kube-api-access-h59p2\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634381 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634442 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634484 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-certs\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634526 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfqd\" (UniqueName: \"kubernetes.io/projected/049eab8c-0b60-4da6-8ad3-64b6ba257301-kube-api-access-bwfqd\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjtq\" (UniqueName: \"kubernetes.io/projected/c2dd2c5e-2805-4166-8819-65486393641b-kube-api-access-8zjtq\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634574 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10a3cae7-8c99-405a-a5ec-cbed98e037bd-metrics-tls\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634596 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9x95\" (UniqueName: \"kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634656 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-proxy-tls\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43783d3c-c9b4-456b-a821-09ddf3e9ca75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 
08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634980 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10a3cae7-8c99-405a-a5ec-cbed98e037bd-config-volume\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.635893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-socket-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.636061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0dca839a-7824-4d7b-9c48-ebc16a0d156f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.638196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bcd05b-916e-481a-989f-eb860903e1a1-metrics-tls\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" 
Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.638492 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e544519-8cbf-476f-8025-c4c867dead85-tmpfs\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.638818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.639420 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-apiservice-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.643796 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.644904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-srv-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.645520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-images\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.645558 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a27437d-c17a-4c8e-837c-f86587dc9346-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.645999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-config\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.646036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2dd2c5e-2805-4166-8819-65486393641b-cert\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.646115 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bef53da3-4222-4e59-9ef7-373d15c9721b-signing-key\") pod 
\"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.646569 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-srv-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.647070 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc204192-dc8a-4a27-ba31-7b45ed831217-profile-collector-cert\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.647182 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3118d277-cc77-47d6-b6ae-7799a5220bef-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.647523 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-registration-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.634376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/759ae8a0-0d30-4da6-82e5-7d82ebfec823-csi-data-dir\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.648521 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.651582 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-node-bootstrap-token\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.652731 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.653201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6985cebb-8e40-453c-ae6f-935654eed745-certs\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.654725 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.654760 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43783d3c-c9b4-456b-a821-09ddf3e9ca75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-proxy-tls\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658434 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dca839a-7824-4d7b-9c48-ebc16a0d156f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658650 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10a3cae7-8c99-405a-a5ec-cbed98e037bd-metrics-tls\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658895 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.658992 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e544519-8cbf-476f-8025-c4c867dead85-webhook-cert\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.660797 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df778b51-e71b-4518-8cd8-e01c12cfd03f-serving-cert\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.660879 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb163e19a_baca_49ff_a20e_457983474014.slice/crio-fd6c7a5da2e7d3df0772f38f235371f3c2e49001db7f89b214cdb7d1bb07b8c7 WatchSource:0}: 
Error finding container fd6c7a5da2e7d3df0772f38f235371f3c2e49001db7f89b214cdb7d1bb07b8c7: Status 404 returned error can't find the container with id fd6c7a5da2e7d3df0772f38f235371f3c2e49001db7f89b214cdb7d1bb07b8c7 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.661036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/281b54ef-f700-437e-b975-1979a9d31151-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.661446 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/049eab8c-0b60-4da6-8ad3-64b6ba257301-metrics-tls\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.661507 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.662427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 
08:42:46.673863 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d7251-5000-4973-aeaa-0d085d4f264d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b6vjx\" (UID: \"7c4d7251-5000-4973-aeaa-0d085d4f264d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.688597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.688634 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.688860 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.688948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token\") pod \"image-registry-697d97f7c8-76p85\" 
(UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.689381 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c5fj\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.690250 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.690765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.690817 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.691460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.694881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.710048 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl98p\" (UniqueName: \"kubernetes.io/projected/a9e02138-74fa-484f-a87d-b88e05e92d58-kube-api-access-jl98p\") pod \"machine-approver-56656f9798-zvh9m\" (UID: \"a9e02138-74fa-484f-a87d-b88e05e92d58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.716510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpbf\" (UniqueName: \"kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf\") pod \"console-f9d7485db-zp4ks\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.732575 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ffc2f5_a2b0_40bd_9732_c24caee2dfdb.slice/crio-bc9c2ca828426b6175db80ca8a55bfd9e31a4cef45e0a7f750fb952667b6b7b2 WatchSource:0}: Error finding container bc9c2ca828426b6175db80ca8a55bfd9e31a4cef45e0a7f750fb952667b6b7b2: Status 404 returned error can't find the container with id bc9c2ca828426b6175db80ca8a55bfd9e31a4cef45e0a7f750fb952667b6b7b2 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.737893 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.738488 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.238473259 +0000 UTC m=+218.191796579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.740147 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gv48h"] Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.742391 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.751925 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23f2035_3d2d_4caa_a1d4_62dfd729f876.slice/crio-071f0558d242b12a0cf75bc4e3e40a714eea07fcd1a9d287b35c7b3aed974c17 WatchSource:0}: Error finding container 071f0558d242b12a0cf75bc4e3e40a714eea07fcd1a9d287b35c7b3aed974c17: Status 404 returned error can't find the container with id 071f0558d242b12a0cf75bc4e3e40a714eea07fcd1a9d287b35c7b3aed974c17 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.755785 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxmn\" (UniqueName: \"kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn\") pod \"oauth-openshift-558db77b4-vbbdr\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.757171 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.764563 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.772193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkvv2\" (UniqueName: \"kubernetes.io/projected/cc204192-dc8a-4a27-ba31-7b45ed831217-kube-api-access-pkvv2\") pod \"catalog-operator-68c6474976-xpv26\" (UID: \"cc204192-dc8a-4a27-ba31-7b45ed831217\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.775313 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.791045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.791632 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5vc\" (UniqueName: \"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-kube-api-access-rv5vc\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.810944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhrf\" (UniqueName: \"kubernetes.io/projected/3e544519-8cbf-476f-8025-c4c867dead85-kube-api-access-fdhrf\") pod \"packageserver-d55dfcdfc-r7bc7\" (UID: \"3e544519-8cbf-476f-8025-c4c867dead85\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.834442 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bps2h\" (UniqueName: \"kubernetes.io/projected/9194a5de-636b-410e-b05b-64f861a6daf9-kube-api-access-bps2h\") pod \"migrator-59844c95c7-2z52n\" (UID: \"9194a5de-636b-410e-b05b-64f861a6daf9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.839567 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.839893 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.339877234 +0000 UTC m=+218.293200554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.854376 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.858119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77bcd05b-916e-481a-989f-eb860903e1a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sph2s\" (UID: \"77bcd05b-916e-481a-989f-eb860903e1a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.870340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2g8\" (UniqueName: \"kubernetes.io/projected/6985cebb-8e40-453c-ae6f-935654eed745-kube-api-access-5b2g8\") pod \"machine-config-server-nnfrn\" (UID: \"6985cebb-8e40-453c-ae6f-935654eed745\") " pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.874301 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" event={"ID":"2b5f17e9-1e47-4c7a-b225-a874c78f88ef","Type":"ContainerStarted","Data":"bb8d6e4082072ca2a7ecb448fab602fd256b4243071bb80fb724c2de43274532"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.875134 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.875541 4808 generic.go:334] "Generic (PLEG): container finished" podID="30ee7d2f-3ff9-44c0-8128-e5a25ed43613" containerID="995f0ac98d9ca73c615ac692e685d2e28e4453c435fe3f23c8bc90176ab0b227" exitCode=0 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.875622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" event={"ID":"30ee7d2f-3ff9-44c0-8128-e5a25ed43613","Type":"ContainerDied","Data":"995f0ac98d9ca73c615ac692e685d2e28e4453c435fe3f23c8bc90176ab0b227"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.875674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" event={"ID":"30ee7d2f-3ff9-44c0-8128-e5a25ed43613","Type":"ContainerStarted","Data":"66a4b8a5231dd3effd22bf523e585d5c6a9214ef2e3b83c9997a98ccdf854202"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.876873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gv48h" event={"ID":"a23f2035-3d2d-4caa-a1d4-62dfd729f876","Type":"ContainerStarted","Data":"071f0558d242b12a0cf75bc4e3e40a714eea07fcd1a9d287b35c7b3aed974c17"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.878644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" 
event={"ID":"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb","Type":"ContainerStarted","Data":"bc9c2ca828426b6175db80ca8a55bfd9e31a4cef45e0a7f750fb952667b6b7b2"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.879622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" event={"ID":"b163e19a-baca-49ff-a20e-457983474014","Type":"ContainerStarted","Data":"fd6c7a5da2e7d3df0772f38f235371f3c2e49001db7f89b214cdb7d1bb07b8c7"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.881122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" event={"ID":"a8210275-38af-4411-b048-dc409cc7b88d","Type":"ContainerStarted","Data":"e45e9f438cc70e89f4e24b7364bcefc082b7eb5bb7013587110fd2d2e134759d"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.881147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" event={"ID":"a8210275-38af-4411-b048-dc409cc7b88d","Type":"ContainerStarted","Data":"afaa3d95f13ac7cb1395efaa21abb78e5cb44ce39bbd3aea574fcf91ca5f431b"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.883389 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.894758 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s62s\" (UniqueName: \"kubernetes.io/projected/bef53da3-4222-4e59-9ef7-373d15c9721b-kube-api-access-7s62s\") pod \"service-ca-9c57cc56f-mp49l\" (UID: \"bef53da3-4222-4e59-9ef7-373d15c9721b\") " pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.897450 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" event={"ID":"adc95b24-ac31-47c8-8221-b27da4ea0564","Type":"ContainerStarted","Data":"9ca0ded6cf60645349aaea5474ab53cb48253c03f19c5488ae65f842358e5d17"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.904604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" event={"ID":"83b44b9a-4c3d-4100-ade1-2645a32a237e","Type":"ContainerStarted","Data":"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.904650 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" event={"ID":"83b44b9a-4c3d-4100-ade1-2645a32a237e","Type":"ContainerStarted","Data":"3ebdda935cfe9bdb15b3272c979f2aa59298a6bc23465e53c3fb497515f766b3"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.906729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" event={"ID":"da668c40-37d5-4ebf-ae7e-59e9c301b386","Type":"ContainerStarted","Data":"0bd24cd0a29693c6830228954cf8c245bb823384f51047efd71b56aec40a416c"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.906760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" event={"ID":"da668c40-37d5-4ebf-ae7e-59e9c301b386","Type":"ContainerStarted","Data":"485bd4535e6da3dc9bf18ece89e1b75ad6c1fd56cef0f3ed4300d40aee152325"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.908621 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" event={"ID":"0eec4c1a-2ffd-487b-bd91-ecb5008f789e","Type":"ContainerStarted","Data":"04df614ba085e341a67594cb9022be4ea57ac4eb92e5d76b84beee050156ba37"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.908647 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" event={"ID":"0eec4c1a-2ffd-487b-bd91-ecb5008f789e","Type":"ContainerStarted","Data":"328f003ef25502dbd651948e94b39861f313ff5eb3d9dcb7ab184bd5aec191d2"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.910575 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9wnt" event={"ID":"6c9411d8-3c3c-4ae8-9580-10ae4884967b","Type":"ContainerStarted","Data":"581ce50451b6a7ab00836d335906cb193f792b5ab1365361d3200b9d478bf46d"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.913928 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" event={"ID":"9381da56-e194-4d1c-a8ac-577035653c33","Type":"ContainerStarted","Data":"c737e96205508ffb6874a701fd74bc97d923e9e91d18cd4cff69063783297041"} Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.914940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45cj2\" (UniqueName: \"kubernetes.io/projected/759ae8a0-0d30-4da6-82e5-7d82ebfec823-kube-api-access-45cj2\") pod \"csi-hostpathplugin-f7nlr\" (UID: \"759ae8a0-0d30-4da6-82e5-7d82ebfec823\") " pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 
08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.924808 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.937042 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8rw\" (UniqueName: \"kubernetes.io/projected/8a27437d-c17a-4c8e-837c-f86587dc9346-kube-api-access-7f8rw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h82vj\" (UID: \"8a27437d-c17a-4c8e-837c-f86587dc9346\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.947348 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.947645 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.44751961 +0000 UTC m=+218.400842930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.948729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.949485 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjpgk"] Mar 11 08:42:46 crc kubenswrapper[4808]: E0311 08:42:46.950059 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.450026242 +0000 UTC m=+218.403349622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.950956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.955510 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.961647 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.962479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrr5\" (UniqueName: \"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-kube-api-access-zvrr5\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.974538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz4s6\" (UniqueName: \"kubernetes.io/projected/3118d277-cc77-47d6-b6ae-7799a5220bef-kube-api-access-lz4s6\") pod \"multus-admission-controller-857f4d67dd-t54r9\" (UID: \"3118d277-cc77-47d6-b6ae-7799a5220bef\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:46 crc kubenswrapper[4808]: W0311 08:42:46.990294 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e36469c_36ee_4c28_b4f3_c37c5b25330d.slice/crio-d5ac9d67ed26e52567ab68dea4556586971151b3e6635cf11358f3e146a16774 WatchSource:0}: Error finding container d5ac9d67ed26e52567ab68dea4556586971151b3e6635cf11358f3e146a16774: Status 404 returned error can't find the container with id d5ac9d67ed26e52567ab68dea4556586971151b3e6635cf11358f3e146a16774 Mar 11 08:42:46 crc kubenswrapper[4808]: I0311 08:42:46.991624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43783d3c-c9b4-456b-a821-09ddf3e9ca75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxs94\" (UID: \"43783d3c-c9b4-456b-a821-09ddf3e9ca75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:46 crc 
kubenswrapper[4808]: I0311 08:42:46.995810 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.007690 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nnfrn" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.023648 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfph\" (UniqueName: \"kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph\") pod \"auto-csr-approver-29553642-n69sp\" (UID: \"29ace785-0297-4201-9d1c-778af0740058\") " pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.033373 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2xz5\" (UniqueName: \"kubernetes.io/projected/6b36c7a8-aafc-4327-96b1-bcec3d2cf99e-kube-api-access-j2xz5\") pod \"package-server-manager-789f6589d5-kgwqw\" (UID: \"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.052677 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.052827 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-11 08:42:47.552803498 +0000 UTC m=+218.506126818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.052971 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.053285 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.553276191 +0000 UTC m=+218.506599511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.058544 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59p2\" (UniqueName: \"kubernetes.io/projected/10a3cae7-8c99-405a-a5ec-cbed98e037bd-kube-api-access-h59p2\") pod \"dns-default-9mn64\" (UID: \"10a3cae7-8c99-405a-a5ec-cbed98e037bd\") " pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.075140 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfqd\" (UniqueName: \"kubernetes.io/projected/049eab8c-0b60-4da6-8ad3-64b6ba257301-kube-api-access-bwfqd\") pod \"dns-operator-744455d44c-97qlz\" (UID: \"049eab8c-0b60-4da6-8ad3-64b6ba257301\") " pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.086758 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.103055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfrp\" (UniqueName: \"kubernetes.io/projected/dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e-kube-api-access-bkfrp\") pod \"machine-config-operator-74547568cd-lfhrh\" (UID: \"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.110275 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.114103 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjk6\" (UniqueName: \"kubernetes.io/projected/281b54ef-f700-437e-b975-1979a9d31151-kube-api-access-hzjk6\") pod \"olm-operator-6b444d44fb-nwcwz\" (UID: \"281b54ef-f700-437e-b975-1979a9d31151\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.117775 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.124458 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.132876 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9x95\" (UniqueName: \"kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95\") pod \"marketplace-operator-79b997595-b25qz\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.138723 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.155055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjtq\" (UniqueName: \"kubernetes.io/projected/c2dd2c5e-2805-4166-8819-65486393641b-kube-api-access-8zjtq\") pod \"ingress-canary-h2qnq\" (UID: \"c2dd2c5e-2805-4166-8819-65486393641b\") " pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.162691 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.163178 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.663163142 +0000 UTC m=+218.616486462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.166726 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.166877 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.169073 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.177630 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwg6x\" (UniqueName: \"kubernetes.io/projected/df778b51-e71b-4518-8cd8-e01c12cfd03f-kube-api-access-gwg6x\") pod \"service-ca-operator-777779d784-8mblp\" (UID: \"df778b51-e71b-4518-8cd8-e01c12cfd03f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.178343 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.191245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf9v\" (UniqueName: \"kubernetes.io/projected/025e6699-14e0-4c1c-8c27-77f280040b4e-kube-api-access-wlf9v\") pod \"downloads-7954f5f757-w6wrf\" (UID: \"025e6699-14e0-4c1c-8c27-77f280040b4e\") " pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.193348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.215016 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.226892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0dca839a-7824-4d7b-9c48-ebc16a0d156f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bbz5\" (UID: \"0dca839a-7824-4d7b-9c48-ebc16a0d156f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.227215 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.232913 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.233282 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e02b1fd1-95ae-45ac-91b5-3f9376f87b41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dljmc\" (UID: \"e02b1fd1-95ae-45ac-91b5-3f9376f87b41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.240415 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.247810 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.253556 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.254199 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvdjq\" (UniqueName: \"kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq\") pod \"collect-profiles-29553630-gsnv8\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.255911 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.266010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.266300 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.766286627 +0000 UTC m=+218.719609947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.275839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h2qnq" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.285824 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.369257 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.369480 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.869450983 +0000 UTC m=+218.822774303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.369655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.369988 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.869974899 +0000 UTC m=+218.823298289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.403627 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.432405 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.471064 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.471186 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.971168888 +0000 UTC m=+218.924492208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.473780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.474111 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:47.974100873 +0000 UTC m=+218.927424193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.487166 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.575386 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.575817 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.075799667 +0000 UTC m=+219.029122977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.576625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.628016 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.629670 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vbbdr"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.661652 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mp49l"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.677491 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.677848 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:42:48.177832551 +0000 UTC m=+219.131155871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.699961 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97qlz"] Mar 11 08:42:47 crc kubenswrapper[4808]: W0311 08:42:47.723831 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ab5c03_d768_4147_bbc2_4e71ac337623.slice/crio-d0c1fc932778507e5ff4041017fff259ed2eda76dc8719beab06708af1a0c33d WatchSource:0}: Error finding container d0c1fc932778507e5ff4041017fff259ed2eda76dc8719beab06708af1a0c33d: Status 404 returned error can't find the container with id d0c1fc932778507e5ff4041017fff259ed2eda76dc8719beab06708af1a0c33d Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.733276 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w6wrf"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.780754 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.781107 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.28109135 +0000 UTC m=+219.234414670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: W0311 08:42:47.792663 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9194a5de_636b_410e_b05b_64f861a6daf9.slice/crio-562220900c303c1bd985573e4dc4848a0031b52aa2e556d5a1ce9ed8be701089 WatchSource:0}: Error finding container 562220900c303c1bd985573e4dc4848a0031b52aa2e556d5a1ce9ed8be701089: Status 404 returned error can't find the container with id 562220900c303c1bd985573e4dc4848a0031b52aa2e556d5a1ce9ed8be701089 Mar 11 08:42:47 crc kubenswrapper[4808]: W0311 08:42:47.796889 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbef53da3_4222_4e59_9ef7_373d15c9721b.slice/crio-61c533c9014d2516f71e2752dc046da1802725993126b0626ce5bdd4bf9c1883 WatchSource:0}: Error finding container 61c533c9014d2516f71e2752dc046da1802725993126b0626ce5bdd4bf9c1883: Status 404 returned error can't find the container with id 61c533c9014d2516f71e2752dc046da1802725993126b0626ce5bdd4bf9c1883 Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.801112 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s"] Mar 
11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.811419 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f7nlr"] Mar 11 08:42:47 crc kubenswrapper[4808]: W0311 08:42:47.861392 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049eab8c_0b60_4da6_8ad3_64b6ba257301.slice/crio-86f0e07bc3975aeed0d17eb7d0d43b954a04c84ac933e6119199c420132843cc WatchSource:0}: Error finding container 86f0e07bc3975aeed0d17eb7d0d43b954a04c84ac933e6119199c420132843cc: Status 404 returned error can't find the container with id 86f0e07bc3975aeed0d17eb7d0d43b954a04c84ac933e6119199c420132843cc Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.867936 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fwsjb" podStartSLOduration=153.867915425 podStartE2EDuration="2m33.867915425s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:47.866597617 +0000 UTC m=+218.819920937" watchObservedRunningTime="2026-03-11 08:42:47.867915425 +0000 UTC m=+218.821238745" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.882965 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.883212 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh"] Mar 11 08:42:47 crc 
kubenswrapper[4808]: E0311 08:42:47.883234 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.383224177 +0000 UTC m=+219.336547497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.945979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" event={"ID":"7b05200d-8025-468a-9c30-fbfd45a80b8b","Type":"ContainerStarted","Data":"0893a81cee756ae0cdc2508227afd0f5c095c7f76f3bd2af9148bee2c38c66f7"} Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.957970 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" event={"ID":"049eab8c-0b60-4da6-8ad3-64b6ba257301","Type":"ContainerStarted","Data":"86f0e07bc3975aeed0d17eb7d0d43b954a04c84ac933e6119199c420132843cc"} Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.979710 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t54r9"] Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.984350 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:47 crc kubenswrapper[4808]: E0311 08:42:47.984805 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.484787096 +0000 UTC m=+219.438110416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:47 crc kubenswrapper[4808]: I0311 08:42:47.989106 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" event={"ID":"9194a5de-636b-410e-b05b-64f861a6daf9","Type":"ContainerStarted","Data":"562220900c303c1bd985573e4dc4848a0031b52aa2e556d5a1ce9ed8be701089"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:47.999891 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" event={"ID":"cc204192-dc8a-4a27-ba31-7b45ed831217","Type":"ContainerStarted","Data":"1f387a886c604125049a64130bcf9ad02d1dc3962190ab8f3bee8f7e2eaef555"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.013628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" event={"ID":"7c4d7251-5000-4973-aeaa-0d085d4f264d","Type":"ContainerStarted","Data":"4c558121a37bc43b28f707152dbf2d014397771433917b7595d2f57b27f29a2d"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.014966 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" event={"ID":"a9e02138-74fa-484f-a87d-b88e05e92d58","Type":"ContainerStarted","Data":"99af861cf65093546d787608cd4d1d854168718b1590ba1d242decad4da36e66"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.018922 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" event={"ID":"2b5f17e9-1e47-4c7a-b225-a874c78f88ef","Type":"ContainerStarted","Data":"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.019611 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.022508 4808 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n2lkc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.022554 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.029736 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" event={"ID":"bef53da3-4222-4e59-9ef7-373d15c9721b","Type":"ContainerStarted","Data":"61c533c9014d2516f71e2752dc046da1802725993126b0626ce5bdd4bf9c1883"} Mar 11 08:42:48 crc 
kubenswrapper[4808]: I0311 08:42:48.044715 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.048988 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" event={"ID":"da668c40-37d5-4ebf-ae7e-59e9c301b386","Type":"ContainerStarted","Data":"fcaafe9eceade4cacf1323eba9b8c9f48f179b7a80af73e85fd6a760306f2ed8"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.050553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" event={"ID":"b163e19a-baca-49ff-a20e-457983474014","Type":"ContainerStarted","Data":"bb06c98c45e11ab4c147160497ce27a82f89d178a7a9fdea1b33e7b58375d083"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.051632 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" event={"ID":"cde5dc2c-6004-42ac-bd4a-93f0bec898fa","Type":"ContainerStarted","Data":"f0b1936f4fa18fe075c7eedfbfbf51ee2d743758629c4093f15208deb7c10700"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.053908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gv48h" event={"ID":"a23f2035-3d2d-4caa-a1d4-62dfd729f876","Type":"ContainerStarted","Data":"31e726a145b48aaa2c6c96349e5636c160b6e480cc6e4497153dbfd4b74e6a2d"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.054106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.056133 4808 generic.go:334] "Generic (PLEG): container finished" podID="adc95b24-ac31-47c8-8221-b27da4ea0564" containerID="80b90fa351249c1136d33185763c3bc0ce97049d7a2a3ee3e724a18fe619677a" 
exitCode=0 Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.056199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" event={"ID":"adc95b24-ac31-47c8-8221-b27da4ea0564","Type":"ContainerDied","Data":"80b90fa351249c1136d33185763c3bc0ce97049d7a2a3ee3e724a18fe619677a"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.058809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" event={"ID":"6e36469c-36ee-4c28-b4f3-c37c5b25330d","Type":"ContainerStarted","Data":"79b3c51cbea9a903e2e7686b9280ae47a2aabdea0923220583fa1bc1f11d9527"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.058843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" event={"ID":"6e36469c-36ee-4c28-b4f3-c37c5b25330d","Type":"ContainerStarted","Data":"d5ac9d67ed26e52567ab68dea4556586971151b3e6635cf11358f3e146a16774"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.061682 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" event={"ID":"3e544519-8cbf-476f-8025-c4c867dead85","Type":"ContainerStarted","Data":"225d18608c8b937a30c6eb1ca3df1f694a2424bb5f2d358b55bd6a3e6feccf27"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.061948 4808 patch_prober.go:28] interesting pod/console-operator-58897d9998-gv48h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.062033 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gv48h" podUID="a23f2035-3d2d-4caa-a1d4-62dfd729f876" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.075732 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" event={"ID":"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb","Type":"ContainerStarted","Data":"56513912b7fa8df758fc2f0b659d8d9126ace805906c14587f8577de84075a34"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.086287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.089789 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.589770235 +0000 UTC m=+219.543093555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.124342 4808 generic.go:334] "Generic (PLEG): container finished" podID="0eec4c1a-2ffd-487b-bd91-ecb5008f789e" containerID="04df614ba085e341a67594cb9022be4ea57ac4eb92e5d76b84beee050156ba37" exitCode=0 Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.124457 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" event={"ID":"0eec4c1a-2ffd-487b-bd91-ecb5008f789e","Type":"ContainerDied","Data":"04df614ba085e341a67594cb9022be4ea57ac4eb92e5d76b84beee050156ba37"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.135302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" event={"ID":"9381da56-e194-4d1c-a8ac-577035653c33","Type":"ContainerStarted","Data":"d524ce9a31014b6251c2e49f3d1111b3629e753e9f2430a8fa468d2210b0bf5b"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.158170 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" event={"ID":"d4ea394d-b845-43b1-93e2-31db42fe21bd","Type":"ContainerStarted","Data":"046912fd7e856facf352f0e1800baaa55ce69ee04a0e9afb246de3814060c077"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.161185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9wnt" 
event={"ID":"6c9411d8-3c3c-4ae8-9580-10ae4884967b","Type":"ContainerStarted","Data":"fd5f5fd8219e8cf25841cd1e7917f1d858a7c48795f35addcfaf1f49aa94cecb"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.168769 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" event={"ID":"30ee7d2f-3ff9-44c0-8128-e5a25ed43613","Type":"ContainerStarted","Data":"c0b826f06286278525b84178715b59cb4a31f927ca75837766a5f295232f2d50"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.172201 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nnfrn" event={"ID":"6985cebb-8e40-453c-ae6f-935654eed745","Type":"ContainerStarted","Data":"d7ea4dbf89590e4d87afbd3e1dd0b1b3c56eb3769015ea13857b63feb201a627"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.177742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp4ks" event={"ID":"24ab5c03-d768-4147-bbc2-4e71ac337623","Type":"ContainerStarted","Data":"d0c1fc932778507e5ff4041017fff259ed2eda76dc8719beab06708af1a0c33d"} Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.177793 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.186038 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6hxxc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.186096 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.190696 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.194918 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.694900488 +0000 UTC m=+219.648223808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.291994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.293940 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.793924825 +0000 UTC m=+219.747248145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.385522 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h2qnq"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.392706 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.393538 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.393637 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.893615331 +0000 UTC m=+219.846938651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.393928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.394693 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.894684362 +0000 UTC m=+219.848007682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.496196 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.496565 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:48.996550131 +0000 UTC m=+219.949873451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: W0311 08:42:48.511902 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod090328a2_0e9e_49a5_b82a_e35947e2fbf2.slice/crio-fe4c0be5f4837f9e7bd9459ee81d96af39f7b14a37319149e79cdf0bea3a308e WatchSource:0}: Error finding container fe4c0be5f4837f9e7bd9459ee81d96af39f7b14a37319149e79cdf0bea3a308e: Status 404 returned error can't find the container with id fe4c0be5f4837f9e7bd9459ee81d96af39f7b14a37319149e79cdf0bea3a308e Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.597958 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.598291 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.098280796 +0000 UTC m=+220.051604116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.602577 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.606385 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553642-n69sp"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.625308 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj"] Mar 11 08:42:48 crc kubenswrapper[4808]: W0311 08:42:48.680000 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ace785_0297_4201_9d1c_778af0740058.slice/crio-940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498 WatchSource:0}: Error finding container 940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498: Status 404 returned error can't find the container with id 940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498 Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.688924 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 08:42:48 crc kubenswrapper[4808]: W0311 08:42:48.699494 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod281b54ef_f700_437e_b975_1979a9d31151.slice/crio-8561cc67850b30bba600efdc08db0a96d33f1d1ee1329fefc01d22654fdb7430 WatchSource:0}: Error finding container 8561cc67850b30bba600efdc08db0a96d33f1d1ee1329fefc01d22654fdb7430: Status 404 returned error can't find the container with id 8561cc67850b30bba600efdc08db0a96d33f1d1ee1329fefc01d22654fdb7430 Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.700326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.700471 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.200445854 +0000 UTC m=+220.153769174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.700552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.701012 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.20100287 +0000 UTC m=+220.154326190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.757948 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.759416 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.759448 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.803179 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.803292 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.303274721 +0000 UTC m=+220.256598041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.803400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.803706 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.303695263 +0000 UTC m=+220.257018603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.804990 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.831090 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" podStartSLOduration=154.831073183 podStartE2EDuration="2m34.831073183s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:48.830827826 +0000 UTC m=+219.784151146" watchObservedRunningTime="2026-03-11 08:42:48.831073183 +0000 UTC m=+219.784396503" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.841503 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8mblp"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.893090 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h9wnt" podStartSLOduration=154.893056551 podStartE2EDuration="2m34.893056551s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:48.882666231 +0000 UTC m=+219.835989551" watchObservedRunningTime="2026-03-11 
08:42:48.893056551 +0000 UTC m=+219.846379871" Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.894328 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.908132 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:48 crc kubenswrapper[4808]: E0311 08:42:48.908536 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.408509337 +0000 UTC m=+220.361832657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.909522 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9mn64"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.937316 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5"] Mar 11 08:42:48 crc kubenswrapper[4808]: I0311 08:42:48.953497 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-td5rr" podStartSLOduration=154.953477344 podStartE2EDuration="2m34.953477344s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:48.949192201 +0000 UTC m=+219.902515521" watchObservedRunningTime="2026-03-11 08:42:48.953477344 +0000 UTC m=+219.906800664" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.010000 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.010311 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.510300254 +0000 UTC m=+220.463623574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.031098 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zld8r" podStartSLOduration=155.031068473 podStartE2EDuration="2m35.031068473s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.030546708 +0000 UTC m=+219.983870028" watchObservedRunningTime="2026-03-11 08:42:49.031068473 +0000 UTC m=+219.984391793" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.043588 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gv48h" podStartSLOduration=155.043568973 podStartE2EDuration="2m35.043568973s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.005760803 +0000 UTC m=+219.959084113" watchObservedRunningTime="2026-03-11 08:42:49.043568973 +0000 UTC m=+219.996892293" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.060186 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc"] Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.078068 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" podStartSLOduration=155.078036688 podStartE2EDuration="2m35.078036688s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.077205324 +0000 UTC m=+220.030528644" watchObservedRunningTime="2026-03-11 08:42:49.078036688 +0000 UTC m=+220.031360008" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.114591 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.114980 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.614961083 +0000 UTC m=+220.568284403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.151013 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bjpgk" podStartSLOduration=155.150995693 podStartE2EDuration="2m35.150995693s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.148982615 +0000 UTC m=+220.102305935" watchObservedRunningTime="2026-03-11 08:42:49.150995693 +0000 UTC m=+220.104319013" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.197021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" event={"ID":"77bcd05b-916e-481a-989f-eb860903e1a1","Type":"ContainerStarted","Data":"a597f4b2770bc3fff21fe03684b06378240f9cd6183298435a0fad99e69dfb14"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.197065 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" event={"ID":"77bcd05b-916e-481a-989f-eb860903e1a1","Type":"ContainerStarted","Data":"0c328cbe1bf6f937ee1ddf435dc041ef85098471c5987a04e6ba3d69fcd4ff3e"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.201966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h2qnq" 
event={"ID":"c2dd2c5e-2805-4166-8819-65486393641b","Type":"ContainerStarted","Data":"0f6d34827852532e63e7e8b3b034f18a1d6edc813a0a2bfe3500e1fe60a9eea7"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.202029 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h2qnq" event={"ID":"c2dd2c5e-2805-4166-8819-65486393641b","Type":"ContainerStarted","Data":"aaa4e3e017012ce15fcb72bd4a23533aa95feb8ee4b16c2e75b1d78c13fcb6e2"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.202471 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" podStartSLOduration=155.202452758 podStartE2EDuration="2m35.202452758s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.194474637 +0000 UTC m=+220.147797957" watchObservedRunningTime="2026-03-11 08:42:49.202452758 +0000 UTC m=+220.155776078" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.215467 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" event={"ID":"7c4d7251-5000-4973-aeaa-0d085d4f264d","Type":"ContainerStarted","Data":"be298dcce3c113d4bd024e6f9706c8ec2598e349b2f8e9151c8b8308786e56b9"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.215955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.216486 4808 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.716465022 +0000 UTC m=+220.669788342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.219348 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" event={"ID":"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e","Type":"ContainerStarted","Data":"49ef7e9c44043ef27a892c4f28a60c58c08100c97ba9c402e12bb3a210cf9588"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.219402 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" event={"ID":"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e","Type":"ContainerStarted","Data":"2bd5e6e3fb511e953ef4ce197e221179d61a0525def46996eff486dc807739b2"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.223558 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nnfrn" event={"ID":"6985cebb-8e40-453c-ae6f-935654eed745","Type":"ContainerStarted","Data":"8e57a47f5bbc8499976514f698dd5c7796519fec5a3f2266d062ac64d8334d33"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.229584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" 
event={"ID":"759ae8a0-0d30-4da6-82e5-7d82ebfec823","Type":"ContainerStarted","Data":"ab49ec3817d300bcea4de2793b214d1d7ecd52b9b3c8ccda0585aee922131b3a"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.234854 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h2qnq" podStartSLOduration=6.229167028 podStartE2EDuration="6.229167028s" podCreationTimestamp="2026-03-11 08:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.226149771 +0000 UTC m=+220.179473091" watchObservedRunningTime="2026-03-11 08:42:49.229167028 +0000 UTC m=+220.182490348" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.264285 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" event={"ID":"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e","Type":"ContainerStarted","Data":"a612487f00eb0454ccf2341a87e9bcc44e2083439d6d02d5c24d71bed4457fe7"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.264335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" event={"ID":"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e","Type":"ContainerStarted","Data":"e1feb5f708a60bbf8558b6bac2b05d1296fbc7c2a70d8f90b81462182ae2f5f4"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.296875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" event={"ID":"a9e02138-74fa-484f-a87d-b88e05e92d58","Type":"ContainerStarted","Data":"feffc14e41bacf5b63e687108f24823ab48513a362ebb530cb4b6ff5d5427508"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.305335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" event={"ID":"8a27437d-c17a-4c8e-837c-f86587dc9346","Type":"ContainerStarted","Data":"30a46ca813442f3f43ab06649323ecb251871c208c443b6f4badf1f7806ae929"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.318312 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.319843 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.819823854 +0000 UTC m=+220.773147174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.328568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9mn64" event={"ID":"10a3cae7-8c99-405a-a5ec-cbed98e037bd","Type":"ContainerStarted","Data":"c1a4d29f18920c0d8e7c8516625522957a91d16b5d5053541756fbfbd6fcf080"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.352044 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nnfrn" podStartSLOduration=5.352024813 podStartE2EDuration="5.352024813s" podCreationTimestamp="2026-03-11 08:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.282898189 +0000 UTC m=+220.236221529" watchObservedRunningTime="2026-03-11 08:42:49.352024813 +0000 UTC m=+220.305348133" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.353540 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b6vjx" podStartSLOduration=155.353530736 podStartE2EDuration="2m35.353530736s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.351126437 +0000 UTC m=+220.304449757" watchObservedRunningTime="2026-03-11 08:42:49.353530736 +0000 UTC m=+220.306854056" Mar 11 08:42:49 crc kubenswrapper[4808]: 
I0311 08:42:49.355267 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" event={"ID":"cc204192-dc8a-4a27-ba31-7b45ed831217","Type":"ContainerStarted","Data":"aabea64eb754654a994b56f7a0296b7f748a1f8bb6a8d4176771a66e573809ce"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.355927 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.359933 4808 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xpv26 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.359999 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" podUID="cc204192-dc8a-4a27-ba31-7b45ed831217" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.381926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" event={"ID":"bef53da3-4222-4e59-9ef7-373d15c9721b","Type":"ContainerStarted","Data":"8ccc970775d328aa343d2d57e48932a0d55d3977a2522ad896980f9a18dfce0d"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.400948 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" event={"ID":"cde5dc2c-6004-42ac-bd4a-93f0bec898fa","Type":"ContainerStarted","Data":"83698ae0e0a9658f66097026b9562a278337f5c2ff08730859dc9c6700c18d23"} Mar 11 08:42:49 crc kubenswrapper[4808]: 
I0311 08:42:49.401421 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" podStartSLOduration=155.401405228 podStartE2EDuration="2m35.401405228s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.400285225 +0000 UTC m=+220.353608545" watchObservedRunningTime="2026-03-11 08:42:49.401405228 +0000 UTC m=+220.354728558" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.406626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerStarted","Data":"fe4c0be5f4837f9e7bd9459ee81d96af39f7b14a37319149e79cdf0bea3a308e"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.419778 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.420735 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:49.920712235 +0000 UTC m=+220.874035555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.420916 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" event={"ID":"0dca839a-7824-4d7b-9c48-ebc16a0d156f","Type":"ContainerStarted","Data":"0f85efefcbbb3e438f6062a902d9563125ed4bed4d6e178c1b220e70928d478c"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.433079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" event={"ID":"e0ffc2f5-a2b0-40bd-9732-c24caee2dfdb","Type":"ContainerStarted","Data":"042e4f43e506728deadb86189a00b598777e99cc1346c316dcf8829965bf103e"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.434709 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" event={"ID":"281b54ef-f700-437e-b975-1979a9d31151","Type":"ContainerStarted","Data":"8561cc67850b30bba600efdc08db0a96d33f1d1ee1329fefc01d22654fdb7430"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.438386 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" event={"ID":"43783d3c-c9b4-456b-a821-09ddf3e9ca75","Type":"ContainerStarted","Data":"cf425cd6510065ff6c18654c0ae4d13f674e218364232437c7ca3fa5d1d1497a"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.449074 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-9c57cc56f-mp49l" podStartSLOduration=155.449051222 podStartE2EDuration="2m35.449051222s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.448797705 +0000 UTC m=+220.402121015" watchObservedRunningTime="2026-03-11 08:42:49.449051222 +0000 UTC m=+220.402374542" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.462713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" event={"ID":"3118d277-cc77-47d6-b6ae-7799a5220bef","Type":"ContainerStarted","Data":"854530e0caa85ca4e40a6b5b5554a0945750c3c852f4371c670a714288a64f6d"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.474790 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553642-n69sp" event={"ID":"29ace785-0297-4201-9d1c-778af0740058","Type":"ContainerStarted","Data":"940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.475987 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s9lbs" podStartSLOduration=155.475970709 podStartE2EDuration="2m35.475970709s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.473627391 +0000 UTC m=+220.426950731" watchObservedRunningTime="2026-03-11 08:42:49.475970709 +0000 UTC m=+220.429294029" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.477758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" 
event={"ID":"86edbb02-cc48-4845-86d4-51c46a4120bf","Type":"ContainerStarted","Data":"7c32559a658e25e9160702c9cc7c78375228500b699ab42370f21e1447c084df"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.480849 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" event={"ID":"3e544519-8cbf-476f-8025-c4c867dead85","Type":"ContainerStarted","Data":"39d390153cd4cb1b498250ceadc2cf57bbc5db8bdeed89a07a5a6b71855b50ab"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.481346 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.492436 4808 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r7bc7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.492492 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" podUID="3e544519-8cbf-476f-8025-c4c867dead85" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.495899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" event={"ID":"9381da56-e194-4d1c-a8ac-577035653c33","Type":"ContainerStarted","Data":"2f11dfec09a78c2229de0b1677fafd4ee9c8ca792d3d74e7d1bbb156bdb9d792"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.503811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" 
event={"ID":"df778b51-e71b-4518-8cd8-e01c12cfd03f","Type":"ContainerStarted","Data":"c4eda71b7fc328807a46dd8ef8fe194894848acee6071f3e001122ee36c7b9d5"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.513573 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" event={"ID":"9194a5de-636b-410e-b05b-64f861a6daf9","Type":"ContainerStarted","Data":"b3a4fdd79201707b73adc8ded22b474c98cd4c2ce9fe1fa75caa0da1154c531a"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.521802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.521857 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ntqfc" podStartSLOduration=155.521839122 podStartE2EDuration="2m35.521839122s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.504873693 +0000 UTC m=+220.458197013" watchObservedRunningTime="2026-03-11 08:42:49.521839122 +0000 UTC m=+220.475162442" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.521974 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.021961056 +0000 UTC m=+220.975284376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.523012 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.523799 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.023786288 +0000 UTC m=+220.977109608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.541867 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" podStartSLOduration=155.54185373 podStartE2EDuration="2m35.54185373s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.541083038 +0000 UTC m=+220.494406358" watchObservedRunningTime="2026-03-11 08:42:49.54185373 +0000 UTC m=+220.495177050" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.544405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp4ks" event={"ID":"24ab5c03-d768-4147-bbc2-4e71ac337623","Type":"ContainerStarted","Data":"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.564481 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p5m5n" podStartSLOduration=155.564465172 podStartE2EDuration="2m35.564465172s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.563838374 +0000 UTC m=+220.517161694" watchObservedRunningTime="2026-03-11 08:42:49.564465172 +0000 UTC m=+220.517788492" Mar 11 08:42:49 
crc kubenswrapper[4808]: I0311 08:42:49.568112 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" event={"ID":"e02b1fd1-95ae-45ac-91b5-3f9376f87b41","Type":"ContainerStarted","Data":"1f8c96de5f6cd22964893f8123b6c5e90099473afd7ee18b9a954c85edb791d3"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.580177 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" event={"ID":"049eab8c-0b60-4da6-8ad3-64b6ba257301","Type":"ContainerStarted","Data":"cf836f88b77f81217c7684f276155f1da810ed9ee520a538b3dd47d93d32786f"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.597230 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zp4ks" podStartSLOduration=155.597206517 podStartE2EDuration="2m35.597206517s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.59455355 +0000 UTC m=+220.547876870" watchObservedRunningTime="2026-03-11 08:42:49.597206517 +0000 UTC m=+220.550529857" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.605020 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" event={"ID":"d4ea394d-b845-43b1-93e2-31db42fe21bd","Type":"ContainerStarted","Data":"fd201d54170e82077ed695f487d798590a994d773aabe033480b88cb02599bcf"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.623660 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.623850 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.123833475 +0000 UTC m=+221.077156795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.624259 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.625245 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.125237616 +0000 UTC m=+221.078560936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.641497 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6hxxc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.641536 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.642179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w6wrf" event={"ID":"025e6699-14e0-4c1c-8c27-77f280040b4e","Type":"ContainerStarted","Data":"1f0c13449d466447f569e2b75cf2eae93ebc9ab046dc055cacec55d4aa6b53a3"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.642207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w6wrf" event={"ID":"025e6699-14e0-4c1c-8c27-77f280040b4e","Type":"ContainerStarted","Data":"14db5b30c5125de45a38dd8196dd5cffe154b0898d809076d76126b58fffa259"} Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.643045 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.643640 4808 patch_prober.go:28] interesting pod/console-operator-58897d9998-gv48h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.643724 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gv48h" podUID="a23f2035-3d2d-4caa-a1d4-62dfd729f876" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.657162 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.657210 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.674906 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.681738 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4llxn" podStartSLOduration=155.681722795 
podStartE2EDuration="2m35.681722795s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.633949947 +0000 UTC m=+220.587273267" watchObservedRunningTime="2026-03-11 08:42:49.681722795 +0000 UTC m=+220.635046125" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.686335 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-w6wrf" podStartSLOduration=155.686318188 podStartE2EDuration="2m35.686318188s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:49.680543751 +0000 UTC m=+220.633867071" watchObservedRunningTime="2026-03-11 08:42:49.686318188 +0000 UTC m=+220.639641508" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.728988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.733354 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.233333604 +0000 UTC m=+221.186656924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.774887 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:49 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:49 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:49 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.774944 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.830975 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.832063 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:42:50.332042292 +0000 UTC m=+221.285365612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:49 crc kubenswrapper[4808]: I0311 08:42:49.936824 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:49 crc kubenswrapper[4808]: E0311 08:42:49.937407 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.437388732 +0000 UTC m=+221.390712052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.040554 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.042523 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.542506874 +0000 UTC m=+221.495830194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.144139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.144274 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.64425405 +0000 UTC m=+221.597577360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.144838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.145144 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.645129905 +0000 UTC m=+221.598453225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.254070 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.254251 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.754221783 +0000 UTC m=+221.707545103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.254445 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.254812 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.754787999 +0000 UTC m=+221.708111319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.356879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.358190 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.858170982 +0000 UTC m=+221.811494302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.458891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.459407 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:50.959395912 +0000 UTC m=+221.912719232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.560141 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.560324 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.060295014 +0000 UTC m=+222.013618344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.560668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.561046 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.061032645 +0000 UTC m=+222.014355955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.662952 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.663174 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.163155661 +0000 UTC m=+222.116478991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.663231 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.663628 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.163617264 +0000 UTC m=+222.116940584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.706709 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" event={"ID":"0eec4c1a-2ffd-487b-bd91-ecb5008f789e","Type":"ContainerStarted","Data":"d1400b66096d829ff8b3efc9f316f2f8055722238e6e428cdc413ffa7cb043b4"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.707481 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.723142 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" event={"ID":"3118d277-cc77-47d6-b6ae-7799a5220bef","Type":"ContainerStarted","Data":"fa3f759f2de7d49aaf4311dbf9219a45e87b74c62bb7917cd4b7cd56cd13ae6d"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.723261 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" event={"ID":"3118d277-cc77-47d6-b6ae-7799a5220bef","Type":"ContainerStarted","Data":"952a027ae7986d15803e21e640737b867b0f198682195c7e4b19a7c0022ec1bf"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.744705 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" 
event={"ID":"9194a5de-636b-410e-b05b-64f861a6daf9","Type":"ContainerStarted","Data":"69310c5f5a72492b6fdf7db1639559fac9f91a0b8c9ce7038ab95e0a797af696"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.766277 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.766596 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.266582285 +0000 UTC m=+222.219905595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.766706 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:50 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:50 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:50 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.766731 4808 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.779863 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" event={"ID":"adc95b24-ac31-47c8-8221-b27da4ea0564","Type":"ContainerStarted","Data":"31d5656f32a44afb147dfe98f375006b37f2b7d010b9f2044fb5cb64ea9b6c9d"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.779909 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" event={"ID":"adc95b24-ac31-47c8-8221-b27da4ea0564","Type":"ContainerStarted","Data":"0df8492d28ef077664fdb214c6a868822aa2b5273ea4a0c567149d321e72ce32"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.794852 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" event={"ID":"86edbb02-cc48-4845-86d4-51c46a4120bf","Type":"ContainerStarted","Data":"4c93a608601bdd1a5a8a72c0a02be858aa4010c878cb3b6b1a0c808f95563a34"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.822024 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" podStartSLOduration=156.822004854 podStartE2EDuration="2m36.822004854s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:50.787092257 +0000 UTC m=+221.740415577" watchObservedRunningTime="2026-03-11 08:42:50.822004854 +0000 UTC m=+221.775328174" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.823956 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-t54r9" podStartSLOduration=156.82394834 podStartE2EDuration="2m36.82394834s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:50.822637242 +0000 UTC m=+221.775960552" watchObservedRunningTime="2026-03-11 08:42:50.82394834 +0000 UTC m=+221.777271660" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.825556 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" event={"ID":"6b36c7a8-aafc-4327-96b1-bcec3d2cf99e","Type":"ContainerStarted","Data":"47b4dd81039c6af3573255bcfcbdcb3bdefb92c2aed47ac847f37af59ee9dfac"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.826283 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.842292 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" event={"ID":"8a27437d-c17a-4c8e-837c-f86587dc9346","Type":"ContainerStarted","Data":"1e1af2202c4baa562b1fd09546966bf8e8d1bccbae01ada7f7d6d7a0d224823e"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.872482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" event={"ID":"0dca839a-7824-4d7b-9c48-ebc16a0d156f","Type":"ContainerStarted","Data":"08a7ea94ec766a493a4ff1c10e46d2cc5a4bc322380b0d1efd1b9cc14b2fdc58"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.872605 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.876866 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.376850117 +0000 UTC m=+222.330173437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.884763 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" event={"ID":"77bcd05b-916e-481a-989f-eb860903e1a1","Type":"ContainerStarted","Data":"167b30ed77eeda6fabe021018cc22ea4278e4e1cdd0493a140c1c0458d13d7fa"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.946518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" event={"ID":"df778b51-e71b-4518-8cd8-e01c12cfd03f","Type":"ContainerStarted","Data":"09d974c51c8c7f7bc19c4c0b05054b50e68b5ce46f81d4d9a363f1d7bd0e5bae"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.949155 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2z52n" podStartSLOduration=156.949142702 
podStartE2EDuration="2m36.949142702s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:50.947543916 +0000 UTC m=+221.900867236" watchObservedRunningTime="2026-03-11 08:42:50.949142702 +0000 UTC m=+221.902466022" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.950276 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" podStartSLOduration=156.950267085 podStartE2EDuration="2m36.950267085s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:50.909222731 +0000 UTC m=+221.862546051" watchObservedRunningTime="2026-03-11 08:42:50.950267085 +0000 UTC m=+221.903590405" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.964955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9mn64" event={"ID":"10a3cae7-8c99-405a-a5ec-cbed98e037bd","Type":"ContainerStarted","Data":"e478e1f98ef3ea1edc7d37b106c0f401b597f4a06314d96934d6287f2b10ad2e"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.974507 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sph2s" podStartSLOduration=156.974343609 podStartE2EDuration="2m36.974343609s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:50.974076242 +0000 UTC m=+221.927399562" watchObservedRunningTime="2026-03-11 08:42:50.974343609 +0000 UTC m=+221.927666939" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.975764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:50 crc kubenswrapper[4808]: E0311 08:42:50.977246 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.477225283 +0000 UTC m=+222.430548603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.988293 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerStarted","Data":"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669"} Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.988775 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:50 crc kubenswrapper[4808]: I0311 08:42:50.992567 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b25qz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 11 08:42:50 
crc kubenswrapper[4808]: I0311 08:42:50.992647 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.008456 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.016853 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.018968 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" podStartSLOduration=157.018948626 podStartE2EDuration="2m37.018948626s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.017417522 +0000 UTC m=+221.970740842" watchObservedRunningTime="2026-03-11 08:42:51.018948626 +0000 UTC m=+221.972271956" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.020840 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" event={"ID":"a9e02138-74fa-484f-a87d-b88e05e92d58","Type":"ContainerStarted","Data":"738e4f1747bda36af93548f67e91cf8476645b5e3ec9073fc67948c528f209b4"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.045879 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.065186 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h82vj" podStartSLOduration=157.06516873 podStartE2EDuration="2m37.06516873s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.062817552 +0000 UTC m=+222.016140872" watchObservedRunningTime="2026-03-11 08:42:51.06516873 +0000 UTC m=+222.018492050" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.065725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" event={"ID":"281b54ef-f700-437e-b975-1979a9d31151","Type":"ContainerStarted","Data":"809a23fb2fd6cbfba6b1f2d6de72a1578d87f423308ceee95cc9692ccde23864"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.066549 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.068030 4808 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nwcwz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.068102 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" podUID="281b54ef-f700-437e-b975-1979a9d31151" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.080184 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.081872 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.581849521 +0000 UTC m=+222.535172841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.090674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" event={"ID":"7b05200d-8025-468a-9c30-fbfd45a80b8b","Type":"ContainerStarted","Data":"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.091494 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.092570 4808 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vbbdr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection 
refused" start-of-body= Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.092606 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.102937 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8mblp" podStartSLOduration=157.102924609 podStartE2EDuration="2m37.102924609s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.101634692 +0000 UTC m=+222.054958012" watchObservedRunningTime="2026-03-11 08:42:51.102924609 +0000 UTC m=+222.056247929" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.129207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" event={"ID":"43783d3c-c9b4-456b-a821-09ddf3e9ca75","Type":"ContainerStarted","Data":"10c8add4437c82701857a807ac9ba48bb3a9fdd105e33d775d1fdbfaf6d4e11f"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.162728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" event={"ID":"e02b1fd1-95ae-45ac-91b5-3f9376f87b41","Type":"ContainerStarted","Data":"fe23d5d8c982293dcf9d73f5186bda62aece4ecc849501427a34cd0d61482c3a"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.173481 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bbz5" podStartSLOduration=157.173466524 
podStartE2EDuration="2m37.173466524s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.172704872 +0000 UTC m=+222.126028192" watchObservedRunningTime="2026-03-11 08:42:51.173466524 +0000 UTC m=+222.126789844" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.182859 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.182943 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.682925567 +0000 UTC m=+222.636248887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.182989 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.183229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.183265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.183569 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.683555636 +0000 UTC m=+222.636878956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.199223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" event={"ID":"dfecc13f-f972-4fac-acb4-b1c8e9ce1b5e","Type":"ContainerStarted","Data":"8b827954df72eb7375f0976b10e0665e0911e59a36a0b85a70b2a93f5e988e05"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.199816 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" podStartSLOduration=157.199796074 podStartE2EDuration="2m37.199796074s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.19826587 +0000 UTC m=+222.151589190" watchObservedRunningTime="2026-03-11 08:42:51.199796074 +0000 UTC m=+222.153119394" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.209517 4808 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bp68q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.209563 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" podUID="adc95b24-ac31-47c8-8221-b27da4ea0564" containerName="openshift-apiserver" 
probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.210789 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" event={"ID":"049eab8c-0b60-4da6-8ad3-64b6ba257301","Type":"ContainerStarted","Data":"89d2b0fd0721912b96a49d0e37bf6bcd2f448c8aac969b69e85bbeedca58a83a"} Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.213738 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.213775 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.219661 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xpv26" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.221304 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxs94" podStartSLOduration=157.221293894 podStartE2EDuration="2m37.221293894s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.220026658 +0000 UTC m=+222.173349978" watchObservedRunningTime="2026-03-11 08:42:51.221293894 +0000 UTC m=+222.174617214" Mar 11 
08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.231089 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wmnr" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.248547 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" podStartSLOduration=157.24851447 podStartE2EDuration="2m37.24851447s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.247623774 +0000 UTC m=+222.200947094" watchObservedRunningTime="2026-03-11 08:42:51.24851447 +0000 UTC m=+222.201837790" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.272382 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zvh9m" podStartSLOduration=157.272139841 podStartE2EDuration="2m37.272139841s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.270846224 +0000 UTC m=+222.224169544" watchObservedRunningTime="2026-03-11 08:42:51.272139841 +0000 UTC m=+222.225463161" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.290781 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.291227 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dljmc" podStartSLOduration=157.291210872 podStartE2EDuration="2m37.291210872s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.290267334 +0000 UTC m=+222.243590654" watchObservedRunningTime="2026-03-11 08:42:51.291210872 +0000 UTC m=+222.244534192" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.295376 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.795340441 +0000 UTC m=+222.748663761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.361012 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" podStartSLOduration=157.360995145 podStartE2EDuration="2m37.360995145s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.358205304 +0000 UTC m=+222.311528624" watchObservedRunningTime="2026-03-11 08:42:51.360995145 +0000 UTC m=+222.314318465" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.385538 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r7bc7" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.394141 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.394487 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.894474031 +0000 UTC m=+222.847797351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.411067 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" podStartSLOduration=157.411048419 podStartE2EDuration="2m37.411048419s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.407304961 +0000 UTC m=+222.360628291" watchObservedRunningTime="2026-03-11 08:42:51.411048419 +0000 UTC 
m=+222.364371739" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.437318 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lfhrh" podStartSLOduration=157.437299076 podStartE2EDuration="2m37.437299076s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.429931754 +0000 UTC m=+222.383255074" watchObservedRunningTime="2026-03-11 08:42:51.437299076 +0000 UTC m=+222.390622396" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.494889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.495298 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:51.995279459 +0000 UTC m=+222.948602779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.508902 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-97qlz" podStartSLOduration=157.508889051 podStartE2EDuration="2m37.508889051s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:51.507977825 +0000 UTC m=+222.461301135" watchObservedRunningTime="2026-03-11 08:42:51.508889051 +0000 UTC m=+222.462212371" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.596722 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.597007 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.096996953 +0000 UTC m=+223.050320273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.697876 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.698035 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.198007447 +0000 UTC m=+223.151330767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.698523 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.698944 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.198929674 +0000 UTC m=+223.152252984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.761089 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:51 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:51 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:51 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.761159 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.800435 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.801063 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-11 08:42:52.30104517 +0000 UTC m=+223.254368480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.901912 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:51 crc kubenswrapper[4808]: E0311 08:42:51.902318 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.402300092 +0000 UTC m=+223.355623412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.964067 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.965111 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.969567 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 08:42:51 crc kubenswrapper[4808]: I0311 08:42:51.982107 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.003200 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.003472 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.503422159 +0000 UTC m=+223.456745479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.090450 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41306: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.104886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.104948 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzk8f\" (UniqueName: \"kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.104974 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.105236 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.105302 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.605285438 +0000 UTC m=+223.558608758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.179947 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.180848 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.206873 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.206982 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.207077 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.707045164 +0000 UTC m=+223.660368484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207177 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzk8f\" (UniqueName: \"kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities\") pod \"community-operators-bc8sc\" (UID: 
\"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207647 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.207920 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.707909079 +0000 UTC m=+223.661232399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.207936 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.227734 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.239833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-9mn64" event={"ID":"10a3cae7-8c99-405a-a5ec-cbed98e037bd","Type":"ContainerStarted","Data":"9d02807e1020c548ccaa5641ce036cd3110dca18748c658c01f0e51b244898a8"} Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.240004 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9mn64" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.243020 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzk8f\" (UniqueName: \"kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f\") pod \"community-operators-bc8sc\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.255619 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" event={"ID":"759ae8a0-0d30-4da6-82e5-7d82ebfec823","Type":"ContainerStarted","Data":"aef0006cebed63ad7e222970ebf9d2b60ef0e4b604789bfcfa6bddb9d97017fe"} Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.257683 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.257731 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.264086 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b25qz container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.264135 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.264314 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41308: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.284432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.308237 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.308558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.308637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pc8c\" (UniqueName: \"kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c\") 
pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.308656 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.308780 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.808766799 +0000 UTC m=+223.762090119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.342484 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nwcwz" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.360166 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9mn64" podStartSLOduration=8.360150711 podStartE2EDuration="8.360150711s" podCreationTimestamp="2026-03-11 08:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-11 08:42:52.356421794 +0000 UTC m=+223.309745114" watchObservedRunningTime="2026-03-11 08:42:52.360150711 +0000 UTC m=+223.313474031" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.399537 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.400434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.409640 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pc8c\" (UniqueName: \"kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.409784 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.410501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.410691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.414134 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.424463 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.426257 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:52.926244958 +0000 UTC m=+223.879568278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.432044 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41312: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.437141 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.453256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pc8c\" (UniqueName: \"kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c\") pod \"certified-operators-pf6x9\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.493327 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.511834 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.512040 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.512065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxkp\" (UniqueName: \"kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.512102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.512197 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-11 08:42:53.012183328 +0000 UTC m=+223.965506648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.560018 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.560942 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.587059 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41318: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.615382 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.615428 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxkp\" (UniqueName: \"kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.615466 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.615525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.615792 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.115780727 +0000 UTC m=+224.069104037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.616236 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.617455 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.640211 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.671556 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxkp\" (UniqueName: \"kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp\") pod \"community-operators-p7nmq\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.685701 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41330: no serving certificate available for the kubelet" Mar 11 08:42:52 crc 
kubenswrapper[4808]: I0311 08:42:52.718992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.719140 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.219123118 +0000 UTC m=+224.172446438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.719388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.719455 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7z98\" (UniqueName: \"kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " 
pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.719518 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.719543 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.719800 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.219792808 +0000 UTC m=+224.173116128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.723168 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vvz25" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.751703 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.765624 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:52 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:52 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:52 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.765691 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.774704 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41344: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.820685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.820897 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.820928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7z98\" (UniqueName: \"kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.820985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.821405 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.821498 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.321482691 +0000 UTC m=+224.274806011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.821697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.893248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7z98\" (UniqueName: \"kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98\") pod \"certified-operators-wxbs4\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.916080 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.918790 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41354: no serving certificate available for the kubelet" Mar 11 08:42:52 crc kubenswrapper[4808]: I0311 08:42:52.922109 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:52 crc kubenswrapper[4808]: E0311 08:42:52.922628 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.422616049 +0000 UTC m=+224.375939369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.027106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.027735 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.527699751 +0000 UTC m=+224.481023121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.078854 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41360: no serving certificate available for the kubelet" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.123612 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.123852 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" containerID="cri-o://7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e" gracePeriod=30 Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.128221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.128632 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:42:53.628622073 +0000 UTC m=+224.581945393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.152717 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.186653 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:42:53 crc kubenswrapper[4808]: W0311 08:42:53.228026 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26a515_59f9_49a6_a9e0_6ff62b523ab4.slice/crio-e0bb0b31911453de8bcbf5d6df0c9e6806b4142d12224ca4635e0909604a2dcd WatchSource:0}: Error finding container e0bb0b31911453de8bcbf5d6df0c9e6806b4142d12224ca4635e0909604a2dcd: Status 404 returned error can't find the container with id e0bb0b31911453de8bcbf5d6df0c9e6806b4142d12224ca4635e0909604a2dcd Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.232408 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.232818 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.732803199 +0000 UTC m=+224.686126519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.235015 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.235217 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerName="route-controller-manager" containerID="cri-o://e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215" gracePeriod=30 Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.259487 4808 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vbbdr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.259542 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.292751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerStarted","Data":"e0bb0b31911453de8bcbf5d6df0c9e6806b4142d12224ca4635e0909604a2dcd"} Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.333598 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.334027 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.834015999 +0000 UTC m=+224.787339319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.350785 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.358987 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" event={"ID":"759ae8a0-0d30-4da6-82e5-7d82ebfec823","Type":"ContainerStarted","Data":"fe810b58ba511dfd3f8110a6ea64e00881d3ad25d5f0b4bda77f141851666438"} Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.431349 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41362: no serving certificate available for the kubelet" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.435338 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.437108 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:53.937087093 +0000 UTC m=+224.890410403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.537334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.537884 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.0378708 +0000 UTC m=+224.991194110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.568865 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.638304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.638717 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.13870279 +0000 UTC m=+225.092026110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.741922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.743268 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.743633 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.243619947 +0000 UTC m=+225.196943267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.779714 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:53 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:53 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:53 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.779767 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.780253 4808 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.844133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.844286 4808 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.34425741 +0000 UTC m=+225.297580730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.844555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.845001 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.344989791 +0000 UTC m=+225.298313181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.875977 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.890810 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.938191 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.938497 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerName="route-controller-manager" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.938511 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerName="route-controller-manager" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.938617 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerName="route-controller-manager" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.939310 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.946777 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.947290 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:53 crc kubenswrapper[4808]: E0311 08:42:53.947523 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.447506179 +0000 UTC m=+225.400829499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:53 crc kubenswrapper[4808]: I0311 08:42:53.960845 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.000520 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.049847 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config\") pod \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050077 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrtw\" (UniqueName: \"kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw\") pod \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050192 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca\") pod \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050246 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert\") pod \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\" (UID: \"2b5f17e9-1e47-4c7a-b225-a874c78f88ef\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcgv\" (UniqueName: \"kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050494 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.050566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.055884 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config" (OuterVolumeSpecName: "config") pod "2b5f17e9-1e47-4c7a-b225-a874c78f88ef" (UID: "2b5f17e9-1e47-4c7a-b225-a874c78f88ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.057543 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b5f17e9-1e47-4c7a-b225-a874c78f88ef" (UID: "2b5f17e9-1e47-4c7a-b225-a874c78f88ef"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.057630 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.557612846 +0000 UTC m=+225.510936226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.065650 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw" (OuterVolumeSpecName: "kube-api-access-qsrtw") pod "2b5f17e9-1e47-4c7a-b225-a874c78f88ef" (UID: "2b5f17e9-1e47-4c7a-b225-a874c78f88ef"). InnerVolumeSpecName "kube-api-access-qsrtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.084873 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b5f17e9-1e47-4c7a-b225-a874c78f88ef" (UID: "2b5f17e9-1e47-4c7a-b225-a874c78f88ef"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.150643 4808 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-11T08:42:53.780266304Z","Handler":null,"Name":""} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151192 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.151254 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.651238227 +0000 UTC m=+225.604561547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151349 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca\") pod \"83b44b9a-4c3d-4100-ade1-2645a32a237e\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert\") pod \"83b44b9a-4c3d-4100-ade1-2645a32a237e\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151445 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config\") pod \"83b44b9a-4c3d-4100-ade1-2645a32a237e\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5clr\" (UniqueName: \"kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr\") pod \"83b44b9a-4c3d-4100-ade1-2645a32a237e\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151502 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles\") pod \"83b44b9a-4c3d-4100-ade1-2645a32a237e\" (UID: \"83b44b9a-4c3d-4100-ade1-2645a32a237e\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151707 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151748 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcgv\" (UniqueName: \"kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151784 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151793 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151803 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsrtw\" (UniqueName: \"kubernetes.io/projected/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-kube-api-access-qsrtw\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.151814 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5f17e9-1e47-4c7a-b225-a874c78f88ef-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.152582 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:42:54.652572826 +0000 UTC m=+225.605896146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-76p85" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.152865 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.153049 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca" (OuterVolumeSpecName: "client-ca") pod "83b44b9a-4c3d-4100-ade1-2645a32a237e" (UID: "83b44b9a-4c3d-4100-ade1-2645a32a237e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.153145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.153261 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config" (OuterVolumeSpecName: "config") pod "83b44b9a-4c3d-4100-ade1-2645a32a237e" (UID: "83b44b9a-4c3d-4100-ade1-2645a32a237e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.153494 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83b44b9a-4c3d-4100-ade1-2645a32a237e" (UID: "83b44b9a-4c3d-4100-ade1-2645a32a237e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.156307 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83b44b9a-4c3d-4100-ade1-2645a32a237e" (UID: "83b44b9a-4c3d-4100-ade1-2645a32a237e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.156608 4808 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.156636 4808 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.156601 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr" (OuterVolumeSpecName: "kube-api-access-x5clr") pod "83b44b9a-4c3d-4100-ade1-2645a32a237e" (UID: "83b44b9a-4c3d-4100-ade1-2645a32a237e"). InnerVolumeSpecName "kube-api-access-x5clr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.166550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcgv\" (UniqueName: \"kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv\") pod \"redhat-marketplace-cnszv\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252619 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252948 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252964 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5clr\" (UniqueName: \"kubernetes.io/projected/83b44b9a-4c3d-4100-ade1-2645a32a237e-kube-api-access-x5clr\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252972 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252981 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b44b9a-4c3d-4100-ade1-2645a32a237e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.252989 4808 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b44b9a-4c3d-4100-ade1-2645a32a237e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.262248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.286731 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.351908 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.352109 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.352127 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.352217 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerName="controller-manager" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.353000 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.353948 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.360127 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.360176 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.374907 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.378162 4808 generic.go:334] "Generic (PLEG): container finished" podID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" containerID="e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.378217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" 
event={"ID":"2b5f17e9-1e47-4c7a-b225-a874c78f88ef","Type":"ContainerDied","Data":"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.378267 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" event={"ID":"2b5f17e9-1e47-4c7a-b225-a874c78f88ef","Type":"ContainerDied","Data":"bb8d6e4082072ca2a7ecb448fab602fd256b4243071bb80fb724c2de43274532"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.378286 4808 scope.go:117] "RemoveContainer" containerID="e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.378550 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.405069 4808 generic.go:334] "Generic (PLEG): container finished" podID="83b44b9a-4c3d-4100-ade1-2645a32a237e" containerID="7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.405290 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.405387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" event={"ID":"83b44b9a-4c3d-4100-ade1-2645a32a237e","Type":"ContainerDied","Data":"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.405416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6hxxc" event={"ID":"83b44b9a-4c3d-4100-ade1-2645a32a237e","Type":"ContainerDied","Data":"3ebdda935cfe9bdb15b3272c979f2aa59298a6bc23465e53c3fb497515f766b3"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.427600 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerID="60cf3b87aeed0a9ce6e296efc7f4eb33421d98aed1b82a0211dabe1deade4f3f" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.427713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerDied","Data":"60cf3b87aeed0a9ce6e296efc7f4eb33421d98aed1b82a0211dabe1deade4f3f"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.427743 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerStarted","Data":"a296c9bd23044dd63c43892375a60a792dd98911bab857bb693b8d3bb468456d"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.430481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-76p85\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.431280 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerID="e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.431323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerDied","Data":"e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.436768 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerID="eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.436834 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerDied","Data":"eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.436861 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerStarted","Data":"c1d164e250c1bbd115334178dc47780ce937c461d2848eab38765c830132fc50"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.443681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" event={"ID":"759ae8a0-0d30-4da6-82e5-7d82ebfec823","Type":"ContainerStarted","Data":"a7910dbe19067a85022056654c25fee9ddae585237de17d2f76443f329aa8785"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 
08:42:54.443728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" event={"ID":"759ae8a0-0d30-4da6-82e5-7d82ebfec823","Type":"ContainerStarted","Data":"b6068a875c1a019ebdc0961557961d46e1a6556696484fb25a244ede16ce62c4"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.446922 4808 generic.go:334] "Generic (PLEG): container finished" podID="32a93168-4bf6-48c0-89b4-5e4393234562" containerID="5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9" exitCode=0 Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.448531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerDied","Data":"5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.448556 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerStarted","Data":"8c1765628c5202bc313e32c21f8c546084dd7a15bd5474fcbb05993c6af6893f"} Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.455076 4808 scope.go:117] "RemoveContainer" containerID="e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215" Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.455517 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215\": container with ID starting with e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215 not found: ID does not exist" containerID="e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.455545 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215"} err="failed to get container status \"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215\": rpc error: code = NotFound desc = could not find container \"e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215\": container with ID starting with e7cf7e02f93e0720f2b42328bf368ed16e99da3d7821162965e4b58e976fa215 not found: ID does not exist" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.455565 4808 scope.go:117] "RemoveContainer" containerID="7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.459524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.459587 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6jf\" (UniqueName: \"kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.459946 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.512282 4808 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" podStartSLOduration=11.512238953 podStartE2EDuration="11.512238953s" podCreationTimestamp="2026-03-11 08:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:54.50904018 +0000 UTC m=+225.462363500" watchObservedRunningTime="2026-03-11 08:42:54.512238953 +0000 UTC m=+225.465562273" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.522882 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.526329 4808 scope.go:117] "RemoveContainer" containerID="7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.526787 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6hxxc"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.526818 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:54 crc kubenswrapper[4808]: E0311 08:42:54.530203 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e\": container with ID starting with 7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e not found: ID does not exist" containerID="7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.530241 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e"} err="failed to get container status \"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e\": rpc error: code = NotFound desc = could not find container \"7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e\": container with ID starting with 7506c3c17ab783d8c9adb09bbeede08209ed2ec8304310ab52096088dd70dd8e not found: ID does not exist" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.558788 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.561708 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.561872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities\") pod 
\"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.561908 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6jf\" (UniqueName: \"kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.563117 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.564220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.567395 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n2lkc"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.583974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6jf\" (UniqueName: \"kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf\") pod \"redhat-marketplace-p7jn7\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.624726 4808 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.625600 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.632149 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.632265 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.632373 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.633200 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.633438 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.647417 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.648195 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.653666 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.658607 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.659511 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.660865 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.661770 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.661858 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.661905 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.662376 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.666141 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.671216 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.744695 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.754512 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41368: no serving certificate available for the kubelet" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.763094 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:54 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:54 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:54 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.763136 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.770842 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.770901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc 
kubenswrapper[4808]: I0311 08:42:54.770934 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.770971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.771062 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.771088 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm9q\" (UniqueName: \"kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.771129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.771247 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.771310 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt6md\" (UniqueName: \"kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.777834 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.786214 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:42:54 crc kubenswrapper[4808]: W0311 08:42:54.798738 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a83bb5a_5523_4258_b6ce_895c0b1d7f07.slice/crio-63c34a18d15242e471c764fee72ff5c382d4030e95bd6fb6b22f3cba14d5ee8d WatchSource:0}: Error finding container 63c34a18d15242e471c764fee72ff5c382d4030e95bd6fb6b22f3cba14d5ee8d: Status 404 returned error can't find the container with id 
63c34a18d15242e471c764fee72ff5c382d4030e95bd6fb6b22f3cba14d5ee8d Mar 11 08:42:54 crc kubenswrapper[4808]: W0311 08:42:54.815940 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65550926_3f8b_436d_8a5c_e425d8c4875f.slice/crio-4b169c4d22bb4941869a3dc4f405ee30482126efc68e79b7589d0d26b1d32b1a WatchSource:0}: Error finding container 4b169c4d22bb4941869a3dc4f405ee30482126efc68e79b7589d0d26b1d32b1a: Status 404 returned error can't find the container with id 4b169c4d22bb4941869a3dc4f405ee30482126efc68e79b7589d0d26b1d32b1a Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872266 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm9q\" (UniqueName: \"kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872326 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872542 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rt6md\" (UniqueName: \"kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872579 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872597 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.872956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.873501 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.874187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.874638 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.874740 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles\") pod 
\"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.874916 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.879028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.879133 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.889711 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt6md\" (UniqueName: \"kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md\") pod \"controller-manager-789bd68df-bj97t\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.894751 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm9q\" 
(UniqueName: \"kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q\") pod \"route-controller-manager-6d99c9f484-zhbtz\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.979764 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:54 crc kubenswrapper[4808]: I0311 08:42:54.996728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.012031 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:55 crc kubenswrapper[4808]: W0311 08:42:55.025183 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c04d52d_2eda_40d3_8252_ac2e14d0a861.slice/crio-5a1a7c5e5e8d459c48e5780db5bae38a05be7ae5e32d1ced13d7be4abe378296 WatchSource:0}: Error finding container 5a1a7c5e5e8d459c48e5780db5bae38a05be7ae5e32d1ced13d7be4abe378296: Status 404 returned error can't find the container with id 5a1a7c5e5e8d459c48e5780db5bae38a05be7ae5e32d1ced13d7be4abe378296 Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.040434 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.041080 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.042957 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.043046 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.052285 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.127616 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.128901 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.134305 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.136032 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.182491 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.182657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.254934 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284327 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7bz\" (UniqueName: \"kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284514 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.284623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.309709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.377661 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.386247 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7bz\" (UniqueName: \"kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.386394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.386478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.387049 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.387625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " 
pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.388339 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.395742 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.421180 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7bz\" (UniqueName: \"kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz\") pod \"redhat-operators-dqmwn\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.450649 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.470896 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" event={"ID":"bfdb996d-28ba-473a-859a-ead31adc0d38","Type":"ContainerStarted","Data":"109569a7bdc8fd4711a5c096b23147a1289e3fe4e03b71f6cb6b140e17699a10"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.475502 4808 generic.go:334] "Generic (PLEG): container finished" podID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerID="11271cd9d7dbca602dd6d35ca2cf0972cd16896cb48334121797004bc4b4eb28" exitCode=0 Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.475827 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerDied","Data":"11271cd9d7dbca602dd6d35ca2cf0972cd16896cb48334121797004bc4b4eb28"} Mar 11 08:42:55 crc kubenswrapper[4808]: 
I0311 08:42:55.475858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerStarted","Data":"5a1a7c5e5e8d459c48e5780db5bae38a05be7ae5e32d1ced13d7be4abe378296"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.482168 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" event={"ID":"65550926-3f8b-436d-8a5c-e425d8c4875f","Type":"ContainerStarted","Data":"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.482207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" event={"ID":"65550926-3f8b-436d-8a5c-e425d8c4875f","Type":"ContainerStarted","Data":"4b169c4d22bb4941869a3dc4f405ee30482126efc68e79b7589d0d26b1d32b1a"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.482776 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.487874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" event={"ID":"eced1a52-bfd2-4a53-b68f-fc28542d2898","Type":"ContainerStarted","Data":"92cd5d8dfe941d1171f902ffb51b91cfa5cd89a46f5b08ec3d12c82c294fdd46"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.489901 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerID="8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748" exitCode=0 Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.490025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" 
event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerDied","Data":"8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.490062 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerStarted","Data":"63c34a18d15242e471c764fee72ff5c382d4030e95bd6fb6b22f3cba14d5ee8d"} Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.528698 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" podStartSLOduration=161.528676257 podStartE2EDuration="2m41.528676257s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:55.520330257 +0000 UTC m=+226.473653587" watchObservedRunningTime="2026-03-11 08:42:55.528676257 +0000 UTC m=+226.481999577" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.553247 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.554405 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.564433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.695276 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.695381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnjp\" (UniqueName: \"kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.695486 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.760871 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:55 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:55 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:55 crc kubenswrapper[4808]: healthz check failed Mar 11 
08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.760918 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.796894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgnjp\" (UniqueName: \"kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.797164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.797204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.797596 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.798271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.809601 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5f17e9-1e47-4c7a-b225-a874c78f88ef" path="/var/lib/kubelet/pods/2b5f17e9-1e47-4c7a-b225-a874c78f88ef/volumes" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.810328 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b44b9a-4c3d-4100-ade1-2645a32a237e" path="/var/lib/kubelet/pods/83b44b9a-4c3d-4100-ade1-2645a32a237e/volumes" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.811031 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.811980 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.817723 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgnjp\" (UniqueName: \"kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp\") pod \"redhat-operators-nn2pz\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: W0311 08:42:55.818592 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd00c2e_5e6d_4fa5_a2a5_28153c8ef4bd.slice/crio-4bc7b1bedc609ebf4989499e799bb099c4bbfbcc608b8f89395f991e6807c3b9 WatchSource:0}: Error finding container 4bc7b1bedc609ebf4989499e799bb099c4bbfbcc608b8f89395f991e6807c3b9: Status 404 returned error can't 
find the container with id 4bc7b1bedc609ebf4989499e799bb099c4bbfbcc608b8f89395f991e6807c3b9 Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.914867 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 11 08:42:55 crc kubenswrapper[4808]: I0311 08:42:55.924133 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:42:55 crc kubenswrapper[4808]: W0311 08:42:55.935491 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0570fe69_84d7_4416_b7ee_a26b6c5603d0.slice/crio-b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878 WatchSource:0}: Error finding container b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878: Status 404 returned error can't find the container with id b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878 Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.190926 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.198915 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bp68q" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.459751 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gv48h" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.536284 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.537783 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" 
event={"ID":"eced1a52-bfd2-4a53-b68f-fc28542d2898","Type":"ContainerStarted","Data":"09532162146b90f5d107bfdceb7680e148dfa13323ba58c0ac73f1c97c00a7eb"} Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.538825 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.553013 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.558690 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerID="22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c" exitCode=0 Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.558774 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerDied","Data":"22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c"} Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.558800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerStarted","Data":"4bc7b1bedc609ebf4989499e799bb099c4bbfbcc608b8f89395f991e6807c3b9"} Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.560721 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" event={"ID":"bfdb996d-28ba-473a-859a-ead31adc0d38","Type":"ContainerStarted","Data":"2cccbe8f082b3fc6784dffed0037b4a70498c05f16c18fbba4a07f9c957a082a"} Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.560815 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerName="route-controller-manager" containerID="cri-o://2cccbe8f082b3fc6784dffed0037b4a70498c05f16c18fbba4a07f9c957a082a" gracePeriod=30 Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.561032 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.568409 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" podStartSLOduration=3.568395375 podStartE2EDuration="3.568395375s" podCreationTimestamp="2026-03-11 08:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:56.567449208 +0000 UTC m=+227.520772528" watchObservedRunningTime="2026-03-11 08:42:56.568395375 +0000 UTC m=+227.521718695" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.591774 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0570fe69-84d7-4416-b7ee-a26b6c5603d0","Type":"ContainerStarted","Data":"b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878"} Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.597126 4808 patch_prober.go:28] interesting pod/route-controller-manager-6d99c9f484-zhbtz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection reset by peer" start-of-body= Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.597192 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" 
podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection reset by peer" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.602203 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" podStartSLOduration=3.60217843 podStartE2EDuration="3.60217843s" podCreationTimestamp="2026-03-11 08:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:56.591996636 +0000 UTC m=+227.545319956" watchObservedRunningTime="2026-03-11 08:42:56.60217843 +0000 UTC m=+227.555501750" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.758885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.764805 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:56 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:56 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:56 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.764873 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.958705 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.959045 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.961302 4808 patch_prober.go:28] interesting pod/console-f9d7485db-zp4ks container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 11 08:42:56 crc kubenswrapper[4808]: I0311 08:42:56.961414 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zp4ks" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.121459 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.122152 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.124855 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.129664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.146048 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.182053 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.243083 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.243182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.263494 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection 
refused" start-of-body= Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.263569 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.263860 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.263875 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.344936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.345083 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.345225 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.375941 4808 ???:1] "http: TLS handshake error from 192.168.126.11:41372: no serving certificate available for the kubelet" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.376205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.462322 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.678289 4808 generic.go:334] "Generic (PLEG): container finished" podID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerID="4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c" exitCode=0 Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.678629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerDied","Data":"4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c"} Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.678658 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerStarted","Data":"9177ac8f927c4fb8931bf30538c47aceff331d1de7ca078b99f112d7e3e49d7f"} Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 
08:42:57.700587 4808 generic.go:334] "Generic (PLEG): container finished" podID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerID="2cccbe8f082b3fc6784dffed0037b4a70498c05f16c18fbba4a07f9c957a082a" exitCode=0 Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.700650 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" event={"ID":"bfdb996d-28ba-473a-859a-ead31adc0d38","Type":"ContainerDied","Data":"2cccbe8f082b3fc6784dffed0037b4a70498c05f16c18fbba4a07f9c957a082a"} Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.719059 4808 generic.go:334] "Generic (PLEG): container finished" podID="0570fe69-84d7-4416-b7ee-a26b6c5603d0" containerID="59fa6b97b1c84a009a19b73744fe63a2ee488fa10680301e814b5c88f44eff64" exitCode=0 Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.719550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0570fe69-84d7-4416-b7ee-a26b6c5603d0","Type":"ContainerDied","Data":"59fa6b97b1c84a009a19b73744fe63a2ee488fa10680301e814b5c88f44eff64"} Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.762021 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:57 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:57 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:57 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:57 crc kubenswrapper[4808]: I0311 08:42:57.762078 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:57 crc 
kubenswrapper[4808]: I0311 08:42:57.990301 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.040531 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:42:58 crc kubenswrapper[4808]: E0311 08:42:58.041174 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerName="route-controller-manager" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.041196 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerName="route-controller-manager" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.041581 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" containerName="route-controller-manager" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.042233 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.044664 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.055936 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.062668 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm9q\" (UniqueName: \"kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q\") pod \"bfdb996d-28ba-473a-859a-ead31adc0d38\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.062708 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert\") pod \"bfdb996d-28ba-473a-859a-ead31adc0d38\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.062781 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config\") pod \"bfdb996d-28ba-473a-859a-ead31adc0d38\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.062809 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca\") pod \"bfdb996d-28ba-473a-859a-ead31adc0d38\" (UID: \"bfdb996d-28ba-473a-859a-ead31adc0d38\") " Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.063027 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.063110 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.063159 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b585\" (UniqueName: \"kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.063240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.065512 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"bfdb996d-28ba-473a-859a-ead31adc0d38" (UID: "bfdb996d-28ba-473a-859a-ead31adc0d38"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.066226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config" (OuterVolumeSpecName: "config") pod "bfdb996d-28ba-473a-859a-ead31adc0d38" (UID: "bfdb996d-28ba-473a-859a-ead31adc0d38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.070693 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bfdb996d-28ba-473a-859a-ead31adc0d38" (UID: "bfdb996d-28ba-473a-859a-ead31adc0d38"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.070823 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q" (OuterVolumeSpecName: "kube-api-access-fcm9q") pod "bfdb996d-28ba-473a-859a-ead31adc0d38" (UID: "bfdb996d-28ba-473a-859a-ead31adc0d38"). InnerVolumeSpecName "kube-api-access-fcm9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.164513 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.164657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.164685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.164804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b585\" (UniqueName: \"kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.164999 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcm9q\" (UniqueName: 
\"kubernetes.io/projected/bfdb996d-28ba-473a-859a-ead31adc0d38-kube-api-access-fcm9q\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.165044 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb996d-28ba-473a-859a-ead31adc0d38-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.165054 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.165063 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb996d-28ba-473a-859a-ead31adc0d38-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.167025 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.167286 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.171574 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert\") pod 
\"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.185253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b585\" (UniqueName: \"kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585\") pod \"route-controller-manager-79c6d49999-km84k\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.363160 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.719327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.728806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"960e521b-59df-4b32-93e0-7bc83f864c68","Type":"ContainerStarted","Data":"2e208fdf3395d825c06c47b229a2b911cc94b7e7eeb2a01817c5c0248a9e3a52"} Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.733755 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.736507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz" event={"ID":"bfdb996d-28ba-473a-859a-ead31adc0d38","Type":"ContainerDied","Data":"109569a7bdc8fd4711a5c096b23147a1289e3fe4e03b71f6cb6b140e17699a10"} Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.736566 4808 scope.go:117] "RemoveContainer" containerID="2cccbe8f082b3fc6784dffed0037b4a70498c05f16c18fbba4a07f9c957a082a" Mar 11 08:42:58 crc kubenswrapper[4808]: W0311 08:42:58.776964 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af7b44d_1117_4fd9_8982_b472d4f26786.slice/crio-84ef1381998c8c254a468f57e2792bee75722bbf2a044c784a3c15216f41a4cd WatchSource:0}: Error finding container 84ef1381998c8c254a468f57e2792bee75722bbf2a044c784a3c15216f41a4cd: Status 404 returned error can't find the container with id 84ef1381998c8c254a468f57e2792bee75722bbf2a044c784a3c15216f41a4cd Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.777034 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:58 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Mar 11 08:42:58 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:58 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.777547 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.812850 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:58 crc kubenswrapper[4808]: I0311 08:42:58.816272 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d99c9f484-zhbtz"] Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.161146 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.190417 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access\") pod \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.190534 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir\") pod \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\" (UID: \"0570fe69-84d7-4416-b7ee-a26b6c5603d0\") " Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.190803 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0570fe69-84d7-4416-b7ee-a26b6c5603d0" (UID: "0570fe69-84d7-4416-b7ee-a26b6c5603d0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.199327 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0570fe69-84d7-4416-b7ee-a26b6c5603d0" (UID: "0570fe69-84d7-4416-b7ee-a26b6c5603d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.291749 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.292130 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0570fe69-84d7-4416-b7ee-a26b6c5603d0-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.743382 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" event={"ID":"0af7b44d-1117-4fd9-8982-b472d4f26786","Type":"ContainerStarted","Data":"bee10930c9ba67e4045dc8f238dee274f3b8e8042e2fc21f4cbdaab6f2c14c39"} Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.743424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" event={"ID":"0af7b44d-1117-4fd9-8982-b472d4f26786","Type":"ContainerStarted","Data":"84ef1381998c8c254a468f57e2792bee75722bbf2a044c784a3c15216f41a4cd"} Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.743810 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:42:59 crc kubenswrapper[4808]: 
I0311 08:42:59.749257 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.749264 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0570fe69-84d7-4416-b7ee-a26b6c5603d0","Type":"ContainerDied","Data":"b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878"} Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.749428 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b676171cf490c34dc7f92f380fb90c66bede6b97d440fb64eba5941e18298878" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.764767 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" podStartSLOduration=4.764719353 podStartE2EDuration="4.764719353s" podCreationTimestamp="2026-03-11 08:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:42:59.759625586 +0000 UTC m=+230.712948906" watchObservedRunningTime="2026-03-11 08:42:59.764719353 +0000 UTC m=+230.718042673" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.769840 4808 patch_prober.go:28] interesting pod/router-default-5444994796-h9wnt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:42:59 crc kubenswrapper[4808]: [+]has-synced ok Mar 11 08:42:59 crc kubenswrapper[4808]: [+]process-running ok Mar 11 08:42:59 crc kubenswrapper[4808]: healthz check failed Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.769899 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9wnt" 
podUID="6c9411d8-3c3c-4ae8-9580-10ae4884967b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.772207 4808 generic.go:334] "Generic (PLEG): container finished" podID="960e521b-59df-4b32-93e0-7bc83f864c68" containerID="5b4c90e3a4da531134897b016029cf710984b3b870386224cabe94889737d841" exitCode=0 Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.772295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"960e521b-59df-4b32-93e0-7bc83f864c68","Type":"ContainerDied","Data":"5b4c90e3a4da531134897b016029cf710984b3b870386224cabe94889737d841"} Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.787585 4808 generic.go:334] "Generic (PLEG): container finished" podID="86edbb02-cc48-4845-86d4-51c46a4120bf" containerID="4c93a608601bdd1a5a8a72c0a02be858aa4010c878cb3b6b1a0c808f95563a34" exitCode=0 Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.787641 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" event={"ID":"86edbb02-cc48-4845-86d4-51c46a4120bf","Type":"ContainerDied","Data":"4c93a608601bdd1a5a8a72c0a02be858aa4010c878cb3b6b1a0c808f95563a34"} Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.806206 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfdb996d-28ba-473a-859a-ead31adc0d38" path="/var/lib/kubelet/pods/bfdb996d-28ba-473a-859a-ead31adc0d38/volumes" Mar 11 08:42:59 crc kubenswrapper[4808]: I0311 08:42:59.868852 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:43:00 crc kubenswrapper[4808]: I0311 08:43:00.762898 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 
08:43:00 crc kubenswrapper[4808]: I0311 08:43:00.767742 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h9wnt" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.236740 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.334151 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.344144 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvdjq\" (UniqueName: \"kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq\") pod \"86edbb02-cc48-4845-86d4-51c46a4120bf\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.344202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume\") pod \"86edbb02-cc48-4845-86d4-51c46a4120bf\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.344245 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume\") pod \"86edbb02-cc48-4845-86d4-51c46a4120bf\" (UID: \"86edbb02-cc48-4845-86d4-51c46a4120bf\") " Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.346057 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "86edbb02-cc48-4845-86d4-51c46a4120bf" (UID: 
"86edbb02-cc48-4845-86d4-51c46a4120bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.402180 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq" (OuterVolumeSpecName: "kube-api-access-hvdjq") pod "86edbb02-cc48-4845-86d4-51c46a4120bf" (UID: "86edbb02-cc48-4845-86d4-51c46a4120bf"). InnerVolumeSpecName "kube-api-access-hvdjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.402381 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86edbb02-cc48-4845-86d4-51c46a4120bf" (UID: "86edbb02-cc48-4845-86d4-51c46a4120bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454237 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir\") pod \"960e521b-59df-4b32-93e0-7bc83f864c68\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454370 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access\") pod \"960e521b-59df-4b32-93e0-7bc83f864c68\" (UID: \"960e521b-59df-4b32-93e0-7bc83f864c68\") " Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"960e521b-59df-4b32-93e0-7bc83f864c68" (UID: "960e521b-59df-4b32-93e0-7bc83f864c68"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454619 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/960e521b-59df-4b32-93e0-7bc83f864c68-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454637 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvdjq\" (UniqueName: \"kubernetes.io/projected/86edbb02-cc48-4845-86d4-51c46a4120bf-kube-api-access-hvdjq\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454650 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86edbb02-cc48-4845-86d4-51c46a4120bf-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.454661 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86edbb02-cc48-4845-86d4-51c46a4120bf-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.459800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "960e521b-59df-4b32-93e0-7bc83f864c68" (UID: "960e521b-59df-4b32-93e0-7bc83f864c68"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.555342 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/960e521b-59df-4b32-93e0-7bc83f864c68-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.860595 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.860628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8" event={"ID":"86edbb02-cc48-4845-86d4-51c46a4120bf","Type":"ContainerDied","Data":"7c32559a658e25e9160702c9cc7c78375228500b699ab42370f21e1447c084df"} Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.861160 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c32559a658e25e9160702c9cc7c78375228500b699ab42370f21e1447c084df" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.869245 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.869858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"960e521b-59df-4b32-93e0-7bc83f864c68","Type":"ContainerDied","Data":"2e208fdf3395d825c06c47b229a2b911cc94b7e7eeb2a01817c5c0248a9e3a52"} Mar 11 08:43:01 crc kubenswrapper[4808]: I0311 08:43:01.870489 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e208fdf3395d825c06c47b229a2b911cc94b7e7eeb2a01817c5c0248a9e3a52" Mar 11 08:43:02 crc kubenswrapper[4808]: I0311 08:43:02.159091 4808 ???:1] "http: TLS handshake error from 192.168.126.11:35814: no serving certificate available for the kubelet" Mar 11 08:43:02 crc kubenswrapper[4808]: I0311 08:43:02.291120 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9mn64" Mar 11 08:43:02 crc kubenswrapper[4808]: I0311 08:43:02.530058 4808 ???:1] "http: TLS handshake error from 192.168.126.11:35824: no serving certificate available for the kubelet" Mar 11 08:43:06 crc kubenswrapper[4808]: I0311 08:43:06.936566 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:43:06 crc kubenswrapper[4808]: I0311 08:43:06.956206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf747e37-c201-4dcc-a2a5-2429f4eba47d-metrics-certs\") pod \"network-metrics-daemon-kqsq9\" (UID: \"cf747e37-c201-4dcc-a2a5-2429f4eba47d\") " pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 
08:43:07.009375 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.013979 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.113628 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kqsq9" Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.254961 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.254973 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6wrf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.255024 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:43:07 crc kubenswrapper[4808]: I0311 08:43:07.255059 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6wrf" podUID="025e6699-14e0-4c1c-8c27-77f280040b4e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 11 08:43:12 crc kubenswrapper[4808]: I0311 08:43:12.369083 4808 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:43:12 crc kubenswrapper[4808]: I0311 08:43:12.370433 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerName="controller-manager" containerID="cri-o://09532162146b90f5d107bfdceb7680e148dfa13323ba58c0ac73f1c97c00a7eb" gracePeriod=30 Mar 11 08:43:12 crc kubenswrapper[4808]: I0311 08:43:12.372385 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:43:12 crc kubenswrapper[4808]: I0311 08:43:12.372607 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerName="route-controller-manager" containerID="cri-o://bee10930c9ba67e4045dc8f238dee274f3b8e8042e2fc21f4cbdaab6f2c14c39" gracePeriod=30 Mar 11 08:43:14 crc kubenswrapper[4808]: I0311 08:43:14.624569 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:43:14 crc kubenswrapper[4808]: I0311 08:43:14.947332 4808 generic.go:334] "Generic (PLEG): container finished" podID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerID="09532162146b90f5d107bfdceb7680e148dfa13323ba58c0ac73f1c97c00a7eb" exitCode=0 Mar 11 08:43:14 crc kubenswrapper[4808]: I0311 08:43:14.947417 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" event={"ID":"eced1a52-bfd2-4a53-b68f-fc28542d2898","Type":"ContainerDied","Data":"09532162146b90f5d107bfdceb7680e148dfa13323ba58c0ac73f1c97c00a7eb"} Mar 11 08:43:14 crc kubenswrapper[4808]: I0311 08:43:14.949043 4808 generic.go:334] "Generic 
(PLEG): container finished" podID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerID="bee10930c9ba67e4045dc8f238dee274f3b8e8042e2fc21f4cbdaab6f2c14c39" exitCode=0 Mar 11 08:43:14 crc kubenswrapper[4808]: I0311 08:43:14.949097 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" event={"ID":"0af7b44d-1117-4fd9-8982-b472d4f26786","Type":"ContainerDied","Data":"bee10930c9ba67e4045dc8f238dee274f3b8e8042e2fc21f4cbdaab6f2c14c39"} Mar 11 08:43:15 crc kubenswrapper[4808]: I0311 08:43:15.013695 4808 patch_prober.go:28] interesting pod/controller-manager-789bd68df-bj97t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Mar 11 08:43:15 crc kubenswrapper[4808]: I0311 08:43:15.013754 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Mar 11 08:43:16 crc kubenswrapper[4808]: I0311 08:43:16.027729 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:43:16 crc kubenswrapper[4808]: I0311 08:43:16.027837 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 11 08:43:17 crc kubenswrapper[4808]: I0311 08:43:17.261098 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-w6wrf" Mar 11 08:43:19 crc kubenswrapper[4808]: I0311 08:43:19.365477 4808 patch_prober.go:28] interesting pod/route-controller-manager-79c6d49999-km84k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:43:19 crc kubenswrapper[4808]: I0311 08:43:19.365871 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.140803 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.141289 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 08:43:21 crc kubenswrapper[4808]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 11 08:43:21 crc kubenswrapper[4808]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4qfph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29553642-n69sp_openshift-infra(29ace785-0297-4201-9d1c-778af0740058): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 11 08:43:21 crc kubenswrapper[4808]: > logger="UnhandledError" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.142403 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29553642-n69sp" podUID="29ace785-0297-4201-9d1c-778af0740058" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.423857 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.426289 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447219 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.447645 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0570fe69-84d7-4416-b7ee-a26b6c5603d0" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447659 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0570fe69-84d7-4416-b7ee-a26b6c5603d0" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.447674 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerName="route-controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447680 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerName="route-controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.447689 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86edbb02-cc48-4845-86d4-51c46a4120bf" containerName="collect-profiles" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447694 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="86edbb02-cc48-4845-86d4-51c46a4120bf" containerName="collect-profiles" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.447705 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960e521b-59df-4b32-93e0-7bc83f864c68" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447712 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="960e521b-59df-4b32-93e0-7bc83f864c68" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: E0311 08:43:21.447720 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerName="controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447727 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerName="controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447824 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="960e521b-59df-4b32-93e0-7bc83f864c68" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447833 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" containerName="controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447845 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0570fe69-84d7-4416-b7ee-a26b6c5603d0" containerName="pruner" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447852 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="86edbb02-cc48-4845-86d4-51c46a4120bf" containerName="collect-profiles" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.447860 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" containerName="route-controller-manager" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.448196 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.463478 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.484968 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b585\" (UniqueName: \"kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585\") pod \"0af7b44d-1117-4fd9-8982-b472d4f26786\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485054 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles\") pod \"eced1a52-bfd2-4a53-b68f-fc28542d2898\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485141 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert\") pod \"eced1a52-bfd2-4a53-b68f-fc28542d2898\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485171 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert\") pod \"0af7b44d-1117-4fd9-8982-b472d4f26786\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485203 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config\") pod 
\"0af7b44d-1117-4fd9-8982-b472d4f26786\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485260 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca\") pod \"0af7b44d-1117-4fd9-8982-b472d4f26786\" (UID: \"0af7b44d-1117-4fd9-8982-b472d4f26786\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config\") pod \"eced1a52-bfd2-4a53-b68f-fc28542d2898\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485329 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt6md\" (UniqueName: \"kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md\") pod \"eced1a52-bfd2-4a53-b68f-fc28542d2898\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485388 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca\") pod \"eced1a52-bfd2-4a53-b68f-fc28542d2898\" (UID: \"eced1a52-bfd2-4a53-b68f-fc28542d2898\") " Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485627 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485690 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.485739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n28ln\" (UniqueName: \"kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.486166 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eced1a52-bfd2-4a53-b68f-fc28542d2898" (UID: "eced1a52-bfd2-4a53-b68f-fc28542d2898"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.486674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca" (OuterVolumeSpecName: "client-ca") pod "eced1a52-bfd2-4a53-b68f-fc28542d2898" (UID: "eced1a52-bfd2-4a53-b68f-fc28542d2898"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.486751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca" (OuterVolumeSpecName: "client-ca") pod "0af7b44d-1117-4fd9-8982-b472d4f26786" (UID: "0af7b44d-1117-4fd9-8982-b472d4f26786"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.487168 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config" (OuterVolumeSpecName: "config") pod "eced1a52-bfd2-4a53-b68f-fc28542d2898" (UID: "eced1a52-bfd2-4a53-b68f-fc28542d2898"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.487664 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config" (OuterVolumeSpecName: "config") pod "0af7b44d-1117-4fd9-8982-b472d4f26786" (UID: "0af7b44d-1117-4fd9-8982-b472d4f26786"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.490597 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585" (OuterVolumeSpecName: "kube-api-access-8b585") pod "0af7b44d-1117-4fd9-8982-b472d4f26786" (UID: "0af7b44d-1117-4fd9-8982-b472d4f26786"). InnerVolumeSpecName "kube-api-access-8b585". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.492747 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0af7b44d-1117-4fd9-8982-b472d4f26786" (UID: "0af7b44d-1117-4fd9-8982-b472d4f26786"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.499157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md" (OuterVolumeSpecName: "kube-api-access-rt6md") pod "eced1a52-bfd2-4a53-b68f-fc28542d2898" (UID: "eced1a52-bfd2-4a53-b68f-fc28542d2898"). InnerVolumeSpecName "kube-api-access-rt6md". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.500513 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eced1a52-bfd2-4a53-b68f-fc28542d2898" (UID: "eced1a52-bfd2-4a53-b68f-fc28542d2898"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587034 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n28ln\" (UniqueName: \"kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587179 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587231 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eced1a52-bfd2-4a53-b68f-fc28542d2898-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587242 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af7b44d-1117-4fd9-8982-b472d4f26786-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587252 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587260 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af7b44d-1117-4fd9-8982-b472d4f26786-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587267 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587275 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt6md\" (UniqueName: \"kubernetes.io/projected/eced1a52-bfd2-4a53-b68f-fc28542d2898-kube-api-access-rt6md\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587285 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.587295 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b585\" (UniqueName: \"kubernetes.io/projected/0af7b44d-1117-4fd9-8982-b472d4f26786-kube-api-access-8b585\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: 
I0311 08:43:21.587303 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eced1a52-bfd2-4a53-b68f-fc28542d2898-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.588677 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.588767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.591149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 08:43:21.603941 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n28ln\" (UniqueName: \"kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln\") pod \"route-controller-manager-6cc9c5fb9c-bhw8h\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:21 crc kubenswrapper[4808]: I0311 
08:43:21.768915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.002551 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" event={"ID":"0af7b44d-1117-4fd9-8982-b472d4f26786","Type":"ContainerDied","Data":"84ef1381998c8c254a468f57e2792bee75722bbf2a044c784a3c15216f41a4cd"} Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.002608 4808 scope.go:117] "RemoveContainer" containerID="bee10930c9ba67e4045dc8f238dee274f3b8e8042e2fc21f4cbdaab6f2c14c39" Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.002715 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k" Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.013929 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.014034 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bd68df-bj97t" event={"ID":"eced1a52-bfd2-4a53-b68f-fc28542d2898","Type":"ContainerDied","Data":"92cd5d8dfe941d1171f902ffb51b91cfa5cd89a46f5b08ec3d12c82c294fdd46"} Mar 11 08:43:22 crc kubenswrapper[4808]: E0311 08:43:22.028337 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29553642-n69sp" podUID="29ace785-0297-4201-9d1c-778af0740058" Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.041866 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.044836 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c6d49999-km84k"] Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.062760 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:43:22 crc kubenswrapper[4808]: I0311 08:43:22.069270 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-789bd68df-bj97t"] Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.033539 4808 ???:1] "http: TLS handshake error from 192.168.126.11:55380: no serving certificate available for the kubelet" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.644736 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 
08:43:23.645589 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.648603 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.651713 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.651769 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.651718 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.652396 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.652497 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.654039 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.658874 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.716897 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: 
\"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.716942 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbg9t\" (UniqueName: \"kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.716992 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.717111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.717134 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.794934 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="0af7b44d-1117-4fd9-8982-b472d4f26786" path="/var/lib/kubelet/pods/0af7b44d-1117-4fd9-8982-b472d4f26786/volumes" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.795601 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eced1a52-bfd2-4a53-b68f-fc28542d2898" path="/var/lib/kubelet/pods/eced1a52-bfd2-4a53-b68f-fc28542d2898/volumes" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.817940 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.818167 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.818248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbg9t\" (UniqueName: \"kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.818368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " 
pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.818467 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.819602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.819677 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.819879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.828952 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert\") pod 
\"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.839074 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbg9t\" (UniqueName: \"kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t\") pod \"controller-manager-74d4b4f8bf-sj6xb\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:23 crc kubenswrapper[4808]: I0311 08:43:23.968530 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:27 crc kubenswrapper[4808]: I0311 08:43:27.232425 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kgwqw" Mar 11 08:43:28 crc kubenswrapper[4808]: I0311 08:43:28.061211 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kqsq9"] Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.046534 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.047683 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.050110 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.054262 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.054668 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.103112 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.103324 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.204388 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.204489 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.204557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.228039 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:30 crc kubenswrapper[4808]: I0311 08:43:30.376196 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:31 crc kubenswrapper[4808]: E0311 08:43:31.333959 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 11 08:43:31 crc kubenswrapper[4808]: E0311 08:43:31.334534 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pc8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pf6x9_openshift-marketplace(32a93168-4bf6-48c0-89b4-5e4393234562): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 08:43:31 crc kubenswrapper[4808]: E0311 08:43:31.336056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pf6x9" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" Mar 11 08:43:32 crc kubenswrapper[4808]: I0311 08:43:32.330269 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:32 crc kubenswrapper[4808]: I0311 08:43:32.428830 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.401917 4808 scope.go:117] "RemoveContainer" containerID="09532162146b90f5d107bfdceb7680e148dfa13323ba58c0ac73f1c97c00a7eb" Mar 11 08:43:35 crc kubenswrapper[4808]: W0311 08:43:35.429557 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf747e37_c201_4dcc_a2a5_2429f4eba47d.slice/crio-06604f557b6ea4aa0c368ca117037a17e78d59b98f8607d897eae96ab9a08291 WatchSource:0}: Error finding container 06604f557b6ea4aa0c368ca117037a17e78d59b98f8607d897eae96ab9a08291: Status 404 returned error can't find the container with id 06604f557b6ea4aa0c368ca117037a17e78d59b98f8607d897eae96ab9a08291 Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.442146 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 08:43:35 crc 
kubenswrapper[4808]: I0311 08:43:35.443015 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.453162 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.487976 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pf6x9" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.494647 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.494749 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgnjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nn2pz_openshift-marketplace(06cf5014-8032-4d6d-b905-1d7196c123c7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.495881 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nn2pz" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" Mar 11 08:43:35 crc 
kubenswrapper[4808]: E0311 08:43:35.515220 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.515375 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf6jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-p7jn7_openshift-marketplace(8c04d52d-2eda-40d3-8252-ac2e14d0a861): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.516530 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.516543 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p7jn7" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.516641 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7z98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wxbs4_openshift-marketplace(8d9db75f-12f9-4870-94c5-474c8b74f021): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.517756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wxbs4" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" Mar 11 08:43:35 crc 
kubenswrapper[4808]: E0311 08:43:35.544649 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.544793 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww7bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-dqmwn_openshift-marketplace(5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 08:43:35 crc kubenswrapper[4808]: E0311 08:43:35.546516 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dqmwn" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.572870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.573015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.573085 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.673902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.674264 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.673998 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.674326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.674304 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.696228 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access\") pod \"installer-9-crc\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " 
pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.741910 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 08:43:35 crc kubenswrapper[4808]: W0311 08:43:35.763232 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbf48c1f1_0cc2_45ee_924b_684e78843aff.slice/crio-0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc WatchSource:0}: Error finding container 0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc: Status 404 returned error can't find the container with id 0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.781382 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:43:35 crc kubenswrapper[4808]: I0311 08:43:35.782123 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.012299 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.030085 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:36 crc kubenswrapper[4808]: W0311 08:43:36.063850 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod717368c6_5802_4b49_8081_5f957031df07.slice/crio-d507fcb0abbb8aa603eb00df591aff76dd8600675fd3c0cc1660404cd4818d5c WatchSource:0}: Error finding container d507fcb0abbb8aa603eb00df591aff76dd8600675fd3c0cc1660404cd4818d5c: Status 404 returned error can't find the container with id 
d507fcb0abbb8aa603eb00df591aff76dd8600675fd3c0cc1660404cd4818d5c Mar 11 08:43:36 crc kubenswrapper[4808]: W0311 08:43:36.064186 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod30c9c874_96ee_4ff1_88fb_eedbd0d114e8.slice/crio-1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026 WatchSource:0}: Error finding container 1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026: Status 404 returned error can't find the container with id 1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026 Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.098247 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerID="7fdc6bbafce80b90af5831624a0bde3107007101e892ae2ffa1a536b916a00a5" exitCode=0 Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.098452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerDied","Data":"7fdc6bbafce80b90af5831624a0bde3107007101e892ae2ffa1a536b916a00a5"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.100463 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerID="c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678" exitCode=0 Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.100548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerDied","Data":"c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.124723 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" 
event={"ID":"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a","Type":"ContainerStarted","Data":"275670557d57f0803032fa30f8bc8e44b8682e15dca7097cfe3bd5fdbc6622bb"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.124776 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" event={"ID":"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a","Type":"ContainerStarted","Data":"068f8a3732426376b265c25b39ea80201516251b6d4ea22efe5a7a743d6c291c"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.124806 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" podUID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" containerName="route-controller-manager" containerID="cri-o://275670557d57f0803032fa30f8bc8e44b8682e15dca7097cfe3bd5fdbc6622bb" gracePeriod=30 Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.126206 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" event={"ID":"717368c6-5802-4b49-8081-5f957031df07","Type":"ContainerStarted","Data":"d507fcb0abbb8aa603eb00df591aff76dd8600675fd3c0cc1660404cd4818d5c"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.134871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerStarted","Data":"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.141374 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" event={"ID":"cf747e37-c201-4dcc-a2a5-2429f4eba47d","Type":"ContainerStarted","Data":"f6073c9ce81cd034f3409ab1e97ad6768ef024119906fc4962b56c515bf44c66"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.141443 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" event={"ID":"cf747e37-c201-4dcc-a2a5-2429f4eba47d","Type":"ContainerStarted","Data":"06604f557b6ea4aa0c368ca117037a17e78d59b98f8607d897eae96ab9a08291"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.142871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30c9c874-96ee-4ff1-88fb-eedbd0d114e8","Type":"ContainerStarted","Data":"1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.144133 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bf48c1f1-0cc2-45ee-924b-684e78843aff","Type":"ContainerStarted","Data":"0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc"} Mar 11 08:43:36 crc kubenswrapper[4808]: I0311 08:43:36.160490 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" podStartSLOduration=24.160465315 podStartE2EDuration="24.160465315s" podCreationTimestamp="2026-03-11 08:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:36.157277504 +0000 UTC m=+267.110600824" watchObservedRunningTime="2026-03-11 08:43:36.160465315 +0000 UTC m=+267.113788635" Mar 11 08:43:36 crc kubenswrapper[4808]: E0311 08:43:36.160978 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wxbs4" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" Mar 11 08:43:36 crc kubenswrapper[4808]: E0311 08:43:36.160978 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nn2pz" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" Mar 11 08:43:36 crc kubenswrapper[4808]: E0311 08:43:36.163249 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dqmwn" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" Mar 11 08:43:36 crc kubenswrapper[4808]: E0311 08:43:36.163320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p7jn7" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.155030 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bf48c1f1-0cc2-45ee-924b-684e78843aff","Type":"ContainerStarted","Data":"7afff8d865f705296bfacfaf2857144fbcb3110b7daa423114832e29691666fb"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.158045 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerID="a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9" exitCode=0 Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.158128 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerDied","Data":"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.162935 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kqsq9" event={"ID":"cf747e37-c201-4dcc-a2a5-2429f4eba47d","Type":"ContainerStarted","Data":"0a3982e670318cf26666b42e42d8e89ecd54b65142ae37bce601655064f2c7c3"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.170052 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6cc9c5fb9c-bhw8h_c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a/route-controller-manager/0.log" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.170109 4808 generic.go:334] "Generic (PLEG): container finished" podID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" containerID="275670557d57f0803032fa30f8bc8e44b8682e15dca7097cfe3bd5fdbc6622bb" exitCode=255 Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.170190 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" event={"ID":"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a","Type":"ContainerDied","Data":"275670557d57f0803032fa30f8bc8e44b8682e15dca7097cfe3bd5fdbc6622bb"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.170962 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.170940437 podStartE2EDuration="7.170940437s" podCreationTimestamp="2026-03-11 08:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:37.170495744 +0000 UTC m=+268.123819074" watchObservedRunningTime="2026-03-11 08:43:37.170940437 +0000 UTC m=+268.124263767" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.172197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"30c9c874-96ee-4ff1-88fb-eedbd0d114e8","Type":"ContainerStarted","Data":"dcad9b652a3d7e1a8db40c6e9fab2d4d4f80d84902747a4d68e6ea4f938dc1de"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.185421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" event={"ID":"717368c6-5802-4b49-8081-5f957031df07","Type":"ContainerStarted","Data":"e1f365bb578c8c8bbdfef6a34e8e7663c327d670bb38462c7d63af9737f63fb1"} Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.185556 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" podUID="717368c6-5802-4b49-8081-5f957031df07" containerName="controller-manager" containerID="cri-o://e1f365bb578c8c8bbdfef6a34e8e7663c327d670bb38462c7d63af9737f63fb1" gracePeriod=30 Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.186171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.200840 4808 patch_prober.go:28] interesting pod/controller-manager-74d4b4f8bf-sj6xb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:35956->10.217.0.60:8443: read: connection reset by peer" start-of-body= Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.200897 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" podUID="717368c6-5802-4b49-8081-5f957031df07" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:35956->10.217.0.60:8443: read: connection reset by peer" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.217453 4808 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kqsq9" podStartSLOduration=203.217430906 podStartE2EDuration="3m23.217430906s" podCreationTimestamp="2026-03-11 08:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:37.196948055 +0000 UTC m=+268.150271375" watchObservedRunningTime="2026-03-11 08:43:37.217430906 +0000 UTC m=+268.170754226" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.236856 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" podStartSLOduration=25.236835567 podStartE2EDuration="25.236835567s" podCreationTimestamp="2026-03-11 08:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:37.232166894 +0000 UTC m=+268.185490224" watchObservedRunningTime="2026-03-11 08:43:37.236835567 +0000 UTC m=+268.190158887" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.254024 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.254004814 podStartE2EDuration="2.254004814s" podCreationTimestamp="2026-03-11 08:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:37.25316703 +0000 UTC m=+268.206490350" watchObservedRunningTime="2026-03-11 08:43:37.254004814 +0000 UTC m=+268.207328134" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.472120 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6cc9c5fb9c-bhw8h_c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a/route-controller-manager/0.log" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.472181 4808 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.497485 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:37 crc kubenswrapper[4808]: E0311 08:43:37.497782 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" containerName="route-controller-manager" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.497794 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" containerName="route-controller-manager" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.497892 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" containerName="route-controller-manager" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.498259 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.499912 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n28ln\" (UniqueName: \"kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln\") pod \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert\") pod \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500082 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config\") pod \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca\") pod \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\" (UID: \"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a\") " Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500246 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: 
I0311 08:43:37.500273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500370 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8j96\" (UniqueName: \"kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500390 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.500992 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config" (OuterVolumeSpecName: "config") pod "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" (UID: "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.501198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca" (OuterVolumeSpecName: "client-ca") pod "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" (UID: "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.504949 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.505802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" (UID: "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.507668 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln" (OuterVolumeSpecName: "kube-api-access-n28ln") pod "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" (UID: "c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a"). InnerVolumeSpecName "kube-api-access-n28ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601222 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8j96\" (UniqueName: \"kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601342 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601398 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601409 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n28ln\" (UniqueName: \"kubernetes.io/projected/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-kube-api-access-n28ln\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601422 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.601431 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.602973 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.608157 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.618714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8j96\" (UniqueName: 
\"kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.765500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config\") pod \"route-controller-manager-657d894494-dcmx4\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:37 crc kubenswrapper[4808]: I0311 08:43:37.844112 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.054924 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:38 crc kubenswrapper[4808]: W0311 08:43:38.063740 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8420f8a_2d3b_440b_990b_c7fa2a491de6.slice/crio-55b88efb4d339d437a9258248bcd38f8a8dbf04f64336cefc3980080da9ed6a1 WatchSource:0}: Error finding container 55b88efb4d339d437a9258248bcd38f8a8dbf04f64336cefc3980080da9ed6a1: Status 404 returned error can't find the container with id 55b88efb4d339d437a9258248bcd38f8a8dbf04f64336cefc3980080da9ed6a1 Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.216241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" 
event={"ID":"e8420f8a-2d3b-440b-990b-c7fa2a491de6","Type":"ContainerStarted","Data":"55b88efb4d339d437a9258248bcd38f8a8dbf04f64336cefc3980080da9ed6a1"} Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.217583 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6cc9c5fb9c-bhw8h_c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a/route-controller-manager/0.log" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.217659 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" event={"ID":"c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a","Type":"ContainerDied","Data":"068f8a3732426376b265c25b39ea80201516251b6d4ea22efe5a7a743d6c291c"} Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.217699 4808 scope.go:117] "RemoveContainer" containerID="275670557d57f0803032fa30f8bc8e44b8682e15dca7097cfe3bd5fdbc6622bb" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.217991 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.222083 4808 generic.go:334] "Generic (PLEG): container finished" podID="bf48c1f1-0cc2-45ee-924b-684e78843aff" containerID="7afff8d865f705296bfacfaf2857144fbcb3110b7daa423114832e29691666fb" exitCode=0 Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.222152 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bf48c1f1-0cc2-45ee-924b-684e78843aff","Type":"ContainerDied","Data":"7afff8d865f705296bfacfaf2857144fbcb3110b7daa423114832e29691666fb"} Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.223845 4808 generic.go:334] "Generic (PLEG): container finished" podID="717368c6-5802-4b49-8081-5f957031df07" containerID="e1f365bb578c8c8bbdfef6a34e8e7663c327d670bb38462c7d63af9737f63fb1" exitCode=0 Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.224457 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" event={"ID":"717368c6-5802-4b49-8081-5f957031df07","Type":"ContainerDied","Data":"e1f365bb578c8c8bbdfef6a34e8e7663c327d670bb38462c7d63af9737f63fb1"} Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.290873 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.293892 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc9c5fb9c-bhw8h"] Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.579579 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.717897 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles\") pod \"717368c6-5802-4b49-8081-5f957031df07\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.717966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert\") pod \"717368c6-5802-4b49-8081-5f957031df07\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.718000 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca\") pod \"717368c6-5802-4b49-8081-5f957031df07\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.718036 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbg9t\" (UniqueName: \"kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t\") pod \"717368c6-5802-4b49-8081-5f957031df07\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.718140 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config\") pod \"717368c6-5802-4b49-8081-5f957031df07\" (UID: \"717368c6-5802-4b49-8081-5f957031df07\") " Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.718765 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "717368c6-5802-4b49-8081-5f957031df07" (UID: "717368c6-5802-4b49-8081-5f957031df07"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.718800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca" (OuterVolumeSpecName: "client-ca") pod "717368c6-5802-4b49-8081-5f957031df07" (UID: "717368c6-5802-4b49-8081-5f957031df07"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.719083 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config" (OuterVolumeSpecName: "config") pod "717368c6-5802-4b49-8081-5f957031df07" (UID: "717368c6-5802-4b49-8081-5f957031df07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.722910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t" (OuterVolumeSpecName: "kube-api-access-hbg9t") pod "717368c6-5802-4b49-8081-5f957031df07" (UID: "717368c6-5802-4b49-8081-5f957031df07"). InnerVolumeSpecName "kube-api-access-hbg9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.724982 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "717368c6-5802-4b49-8081-5f957031df07" (UID: "717368c6-5802-4b49-8081-5f957031df07"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.819588 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.819624 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.819639 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717368c6-5802-4b49-8081-5f957031df07-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.819671 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717368c6-5802-4b49-8081-5f957031df07-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:38 crc kubenswrapper[4808]: I0311 08:43:38.819683 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbg9t\" (UniqueName: \"kubernetes.io/projected/717368c6-5802-4b49-8081-5f957031df07-kube-api-access-hbg9t\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.232270 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.232404 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb" event={"ID":"717368c6-5802-4b49-8081-5f957031df07","Type":"ContainerDied","Data":"d507fcb0abbb8aa603eb00df591aff76dd8600675fd3c0cc1660404cd4818d5c"} Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.233025 4808 scope.go:117] "RemoveContainer" containerID="e1f365bb578c8c8bbdfef6a34e8e7663c327d670bb38462c7d63af9737f63fb1" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.268534 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.271980 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74d4b4f8bf-sj6xb"] Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.545399 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.650928 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:39 crc kubenswrapper[4808]: E0311 08:43:39.651117 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf48c1f1-0cc2-45ee-924b-684e78843aff" containerName="pruner" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.651128 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf48c1f1-0cc2-45ee-924b-684e78843aff" containerName="pruner" Mar 11 08:43:39 crc kubenswrapper[4808]: E0311 08:43:39.651138 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717368c6-5802-4b49-8081-5f957031df07" containerName="controller-manager" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.651144 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="717368c6-5802-4b49-8081-5f957031df07" containerName="controller-manager" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.651242 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf48c1f1-0cc2-45ee-924b-684e78843aff" containerName="pruner" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.651256 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="717368c6-5802-4b49-8081-5f957031df07" containerName="controller-manager" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.651623 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.656278 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.656419 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.656485 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.656699 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.656913 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.657104 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.663926 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.670052 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.730566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir\") pod \"bf48c1f1-0cc2-45ee-924b-684e78843aff\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.730596 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf48c1f1-0cc2-45ee-924b-684e78843aff" (UID: "bf48c1f1-0cc2-45ee-924b-684e78843aff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.730699 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access\") pod \"bf48c1f1-0cc2-45ee-924b-684e78843aff\" (UID: \"bf48c1f1-0cc2-45ee-924b-684e78843aff\") " Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.731172 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf48c1f1-0cc2-45ee-924b-684e78843aff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.735115 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf48c1f1-0cc2-45ee-924b-684e78843aff" (UID: "bf48c1f1-0cc2-45ee-924b-684e78843aff"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.799120 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717368c6-5802-4b49-8081-5f957031df07" path="/var/lib/kubelet/pods/717368c6-5802-4b49-8081-5f957031df07/volumes" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.799903 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a" path="/var/lib/kubelet/pods/c25fd6f8-54c9-4e97-b2f4-3e05ba48c52a/volumes" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.832712 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25m7w\" (UniqueName: \"kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.832848 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.832869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.832891 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.832912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.833156 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf48c1f1-0cc2-45ee-924b-684e78843aff-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.933814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25m7w\" (UniqueName: \"kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.933912 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.933933 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.933951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.933968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.934972 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.934970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc 
kubenswrapper[4808]: I0311 08:43:39.935231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.940176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.949581 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25m7w\" (UniqueName: \"kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w\") pod \"controller-manager-5d4b7d57c-s64kv\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:39 crc kubenswrapper[4808]: I0311 08:43:39.970107 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.264799 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" event={"ID":"e8420f8a-2d3b-440b-990b-c7fa2a491de6","Type":"ContainerStarted","Data":"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721"} Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.265575 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.271821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bf48c1f1-0cc2-45ee-924b-684e78843aff","Type":"ContainerDied","Data":"0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc"} Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.272548 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f9878e1a359b4030c5bf971f305e19d83e63488ec5d6311c88c437b2c02f2dc" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.272684 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.276947 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.293092 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" podStartSLOduration=8.293079337 podStartE2EDuration="8.293079337s" podCreationTimestamp="2026-03-11 08:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:40.292951803 +0000 UTC m=+271.246275133" watchObservedRunningTime="2026-03-11 08:43:40.293079337 +0000 UTC m=+271.246402657" Mar 11 08:43:40 crc kubenswrapper[4808]: I0311 08:43:40.848300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:40 crc kubenswrapper[4808]: W0311 08:43:40.855742 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef61fb6f_dce7_4207_b83f_b1877f13875e.slice/crio-32c012f02fcf2c50fac4a46da241d81d5a6e2605fb3e450674eb976f7d1a15b8 WatchSource:0}: Error finding container 32c012f02fcf2c50fac4a46da241d81d5a6e2605fb3e450674eb976f7d1a15b8: Status 404 returned error can't find the container with id 32c012f02fcf2c50fac4a46da241d81d5a6e2605fb3e450674eb976f7d1a15b8 Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.277239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553642-n69sp" event={"ID":"29ace785-0297-4201-9d1c-778af0740058","Type":"ContainerStarted","Data":"adc1b8c556c2fe10f316cd7fea1f8b3c3c2b8e6b4d70df7b29eb9cd75c956b58"} Mar 11 08:43:41 crc kubenswrapper[4808]: 
I0311 08:43:41.280220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerStarted","Data":"049343137e7e50d65de88b8879d2c16d19987e8b09bcc39698d0944d27482291"} Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.283218 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerStarted","Data":"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94"} Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.284758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerStarted","Data":"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a"} Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.285809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" event={"ID":"ef61fb6f-dce7-4207-b83f-b1877f13875e","Type":"ContainerStarted","Data":"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8"} Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.285833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" event={"ID":"ef61fb6f-dce7-4207-b83f-b1877f13875e","Type":"ContainerStarted","Data":"32c012f02fcf2c50fac4a46da241d81d5a6e2605fb3e450674eb976f7d1a15b8"} Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.286164 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.297451 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29553642-n69sp" podStartSLOduration=49.174711405 podStartE2EDuration="1m41.297435886s" podCreationTimestamp="2026-03-11 08:42:00 +0000 UTC" firstStartedPulling="2026-03-11 08:42:48.68889226 +0000 UTC m=+219.642215580" lastFinishedPulling="2026-03-11 08:43:40.811616751 +0000 UTC m=+271.764940061" observedRunningTime="2026-03-11 08:43:41.29441127 +0000 UTC m=+272.247734590" watchObservedRunningTime="2026-03-11 08:43:41.297435886 +0000 UTC m=+272.250759206" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.312296 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7nmq" podStartSLOduration=3.059457751 podStartE2EDuration="49.312279997s" podCreationTimestamp="2026-03-11 08:42:52 +0000 UTC" firstStartedPulling="2026-03-11 08:42:54.429553097 +0000 UTC m=+225.382876407" lastFinishedPulling="2026-03-11 08:43:40.682375343 +0000 UTC m=+271.635698653" observedRunningTime="2026-03-11 08:43:41.310088885 +0000 UTC m=+272.263412205" watchObservedRunningTime="2026-03-11 08:43:41.312279997 +0000 UTC m=+272.265603317" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.331296 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.332865 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" podStartSLOduration=9.33284748 podStartE2EDuration="9.33284748s" podCreationTimestamp="2026-03-11 08:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:41.328504717 +0000 UTC m=+272.281828037" watchObservedRunningTime="2026-03-11 08:43:41.33284748 +0000 UTC m=+272.286170790" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.360633 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bc8sc" podStartSLOduration=5.431786392 podStartE2EDuration="50.360613998s" podCreationTimestamp="2026-03-11 08:42:51 +0000 UTC" firstStartedPulling="2026-03-11 08:42:54.434030786 +0000 UTC m=+225.387354106" lastFinishedPulling="2026-03-11 08:43:39.362858382 +0000 UTC m=+270.316181712" observedRunningTime="2026-03-11 08:43:41.357568422 +0000 UTC m=+272.310891742" watchObservedRunningTime="2026-03-11 08:43:41.360613998 +0000 UTC m=+272.313937328" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.378208 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnszv" podStartSLOduration=4.886991243 podStartE2EDuration="48.378192897s" podCreationTimestamp="2026-03-11 08:42:53 +0000 UTC" firstStartedPulling="2026-03-11 08:42:55.50276301 +0000 UTC m=+226.456086330" lastFinishedPulling="2026-03-11 08:43:38.993964654 +0000 UTC m=+269.947287984" observedRunningTime="2026-03-11 08:43:41.377108836 +0000 UTC m=+272.330432166" watchObservedRunningTime="2026-03-11 08:43:41.378192897 +0000 UTC m=+272.331516217" Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.744595 4808 csr.go:261] certificate signing request csr-zvt6z is approved, waiting to be issued Mar 11 08:43:41 crc kubenswrapper[4808]: I0311 08:43:41.751263 4808 csr.go:257] certificate signing request csr-zvt6z is issued Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.285276 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.285344 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.295093 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="29ace785-0297-4201-9d1c-778af0740058" containerID="adc1b8c556c2fe10f316cd7fea1f8b3c3c2b8e6b4d70df7b29eb9cd75c956b58" exitCode=0 Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.295517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553642-n69sp" event={"ID":"29ace785-0297-4201-9d1c-778af0740058","Type":"ContainerDied","Data":"adc1b8c556c2fe10f316cd7fea1f8b3c3c2b8e6b4d70df7b29eb9cd75c956b58"} Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.753083 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-29 02:11:41.80925547 +0000 UTC Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.753467 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6305h27m59.055795361s for next certificate rotation Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.753512 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:42 crc kubenswrapper[4808]: I0311 08:43:42.753561 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.546155 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.604031 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bc8sc" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="registry-server" probeResult="failure" output=< Mar 11 08:43:43 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:43 crc kubenswrapper[4808]: > Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.688173 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qfph\" (UniqueName: \"kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph\") pod \"29ace785-0297-4201-9d1c-778af0740058\" (UID: \"29ace785-0297-4201-9d1c-778af0740058\") " Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.693692 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph" (OuterVolumeSpecName: "kube-api-access-4qfph") pod "29ace785-0297-4201-9d1c-778af0740058" (UID: "29ace785-0297-4201-9d1c-778af0740058"). InnerVolumeSpecName "kube-api-access-4qfph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.753646 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-04 17:32:29.526453617 +0000 UTC Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.753696 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7184h48m45.772762116s for next certificate rotation Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.789686 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qfph\" (UniqueName: \"kubernetes.io/projected/29ace785-0297-4201-9d1c-778af0740058-kube-api-access-4qfph\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:43 crc kubenswrapper[4808]: I0311 08:43:43.790649 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p7nmq" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="registry-server" probeResult="failure" output=< Mar 11 08:43:43 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:43 crc kubenswrapper[4808]: > Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.287342 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.287713 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.309894 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553642-n69sp" event={"ID":"29ace785-0297-4201-9d1c-778af0740058","Type":"ContainerDied","Data":"940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498"} Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.309941 4808 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="940d9f3a3d86aaca2ac9fe61d6caced57e1fae31f5882b626d70816e20a0d498" Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.310005 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553642-n69sp" Mar 11 08:43:44 crc kubenswrapper[4808]: I0311 08:43:44.363266 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.027245 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.027319 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.027398 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.027932 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.028002 4808 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19" gracePeriod=600 Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.324571 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19" exitCode=0 Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.324662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19"} Mar 11 08:43:46 crc kubenswrapper[4808]: I0311 08:43:46.325026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec"} Mar 11 08:43:49 crc kubenswrapper[4808]: I0311 08:43:49.349841 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerStarted","Data":"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee"} Mar 11 08:43:49 crc kubenswrapper[4808]: I0311 08:43:49.352070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerStarted","Data":"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb"} Mar 11 08:43:50 crc kubenswrapper[4808]: I0311 08:43:50.360139 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerID="93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb" exitCode=0 Mar 11 08:43:50 crc kubenswrapper[4808]: I0311 08:43:50.360230 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerDied","Data":"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb"} Mar 11 08:43:50 crc kubenswrapper[4808]: I0311 08:43:50.362805 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerID="5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee" exitCode=0 Mar 11 08:43:50 crc kubenswrapper[4808]: I0311 08:43:50.362843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerDied","Data":"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee"} Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.369848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerStarted","Data":"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b"} Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.373471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerStarted","Data":"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604"} Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.377393 4808 generic.go:334] "Generic (PLEG): container finished" podID="32a93168-4bf6-48c0-89b4-5e4393234562" containerID="87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189" exitCode=0 Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.377428 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerDied","Data":"87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189"} Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.410421 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqmwn" podStartSLOduration=1.849536685 podStartE2EDuration="56.410402479s" podCreationTimestamp="2026-03-11 08:42:55 +0000 UTC" firstStartedPulling="2026-03-11 08:42:56.57516468 +0000 UTC m=+227.528488000" lastFinishedPulling="2026-03-11 08:43:51.136030474 +0000 UTC m=+282.089353794" observedRunningTime="2026-03-11 08:43:51.392039718 +0000 UTC m=+282.345363028" watchObservedRunningTime="2026-03-11 08:43:51.410402479 +0000 UTC m=+282.363725799" Mar 11 08:43:51 crc kubenswrapper[4808]: I0311 08:43:51.425819 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nn2pz" podStartSLOduration=3.264707132 podStartE2EDuration="56.425800576s" podCreationTimestamp="2026-03-11 08:42:55 +0000 UTC" firstStartedPulling="2026-03-11 08:42:57.68772526 +0000 UTC m=+228.641048580" lastFinishedPulling="2026-03-11 08:43:50.848818704 +0000 UTC m=+281.802142024" observedRunningTime="2026-03-11 08:43:51.420932048 +0000 UTC m=+282.374255368" watchObservedRunningTime="2026-03-11 08:43:51.425800576 +0000 UTC m=+282.379123896" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.348954 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.367311 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.367580 4808 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" podUID="ef61fb6f-dce7-4207-b83f-b1877f13875e" containerName="controller-manager" containerID="cri-o://62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8" gracePeriod=30 Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.384050 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.384300 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" podUID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" containerName="route-controller-manager" containerID="cri-o://9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721" gracePeriod=30 Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.394136 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerID="9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73" exitCode=0 Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.394236 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerDied","Data":"9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73"} Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.394504 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.398549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerStarted","Data":"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3"} Mar 11 
08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.452373 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pf6x9" podStartSLOduration=3.120098212 podStartE2EDuration="1m0.452331073s" podCreationTimestamp="2026-03-11 08:42:52 +0000 UTC" firstStartedPulling="2026-03-11 08:42:54.454932759 +0000 UTC m=+225.408256079" lastFinishedPulling="2026-03-11 08:43:51.78716562 +0000 UTC m=+282.740488940" observedRunningTime="2026-03-11 08:43:52.451941552 +0000 UTC m=+283.405264872" watchObservedRunningTime="2026-03-11 08:43:52.452331073 +0000 UTC m=+283.405654393" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.494796 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.494853 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.811589 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.869183 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.872621 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.928227 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert\") pod \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.928646 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config\") pod \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.928681 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca\") pod \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.928734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8j96\" (UniqueName: \"kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96\") pod \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\" (UID: \"e8420f8a-2d3b-440b-990b-c7fa2a491de6\") " Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.929316 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config" (OuterVolumeSpecName: "config") pod "e8420f8a-2d3b-440b-990b-c7fa2a491de6" (UID: "e8420f8a-2d3b-440b-990b-c7fa2a491de6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.929670 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e8420f8a-2d3b-440b-990b-c7fa2a491de6" (UID: "e8420f8a-2d3b-440b-990b-c7fa2a491de6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.933851 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8420f8a-2d3b-440b-990b-c7fa2a491de6" (UID: "e8420f8a-2d3b-440b-990b-c7fa2a491de6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.934945 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96" (OuterVolumeSpecName: "kube-api-access-f8j96") pod "e8420f8a-2d3b-440b-990b-c7fa2a491de6" (UID: "e8420f8a-2d3b-440b-990b-c7fa2a491de6"). InnerVolumeSpecName "kube-api-access-f8j96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:52 crc kubenswrapper[4808]: I0311 08:43:52.991203 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config\") pod \"ef61fb6f-dce7-4207-b83f-b1877f13875e\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029384 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25m7w\" (UniqueName: \"kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w\") pod \"ef61fb6f-dce7-4207-b83f-b1877f13875e\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029422 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles\") pod \"ef61fb6f-dce7-4207-b83f-b1877f13875e\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert\") pod \"ef61fb6f-dce7-4207-b83f-b1877f13875e\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029504 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca\") pod \"ef61fb6f-dce7-4207-b83f-b1877f13875e\" (UID: \"ef61fb6f-dce7-4207-b83f-b1877f13875e\") " Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029813 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029852 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8420f8a-2d3b-440b-990b-c7fa2a491de6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029866 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8j96\" (UniqueName: \"kubernetes.io/projected/e8420f8a-2d3b-440b-990b-c7fa2a491de6-kube-api-access-f8j96\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.029878 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8420f8a-2d3b-440b-990b-c7fa2a491de6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.030390 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config" (OuterVolumeSpecName: "config") pod "ef61fb6f-dce7-4207-b83f-b1877f13875e" (UID: "ef61fb6f-dce7-4207-b83f-b1877f13875e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.030452 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef61fb6f-dce7-4207-b83f-b1877f13875e" (UID: "ef61fb6f-dce7-4207-b83f-b1877f13875e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.032670 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef61fb6f-dce7-4207-b83f-b1877f13875e" (UID: "ef61fb6f-dce7-4207-b83f-b1877f13875e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.034561 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w" (OuterVolumeSpecName: "kube-api-access-25m7w") pod "ef61fb6f-dce7-4207-b83f-b1877f13875e" (UID: "ef61fb6f-dce7-4207-b83f-b1877f13875e"). InnerVolumeSpecName "kube-api-access-25m7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.036566 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef61fb6f-dce7-4207-b83f-b1877f13875e" (UID: "ef61fb6f-dce7-4207-b83f-b1877f13875e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.131618 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.131652 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef61fb6f-dce7-4207-b83f-b1877f13875e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.131661 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.131673 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef61fb6f-dce7-4207-b83f-b1877f13875e-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.131683 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25m7w\" (UniqueName: \"kubernetes.io/projected/ef61fb6f-dce7-4207-b83f-b1877f13875e-kube-api-access-25m7w\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.405169 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" containerID="9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721" exitCode=0 Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.405227 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.405256 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" event={"ID":"e8420f8a-2d3b-440b-990b-c7fa2a491de6","Type":"ContainerDied","Data":"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.405303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4" event={"ID":"e8420f8a-2d3b-440b-990b-c7fa2a491de6","Type":"ContainerDied","Data":"55b88efb4d339d437a9258248bcd38f8a8dbf04f64336cefc3980080da9ed6a1"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.405369 4808 scope.go:117] "RemoveContainer" containerID="9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.408039 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef61fb6f-dce7-4207-b83f-b1877f13875e" containerID="62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8" exitCode=0 Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.408099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" event={"ID":"ef61fb6f-dce7-4207-b83f-b1877f13875e","Type":"ContainerDied","Data":"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.408118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" event={"ID":"ef61fb6f-dce7-4207-b83f-b1877f13875e","Type":"ContainerDied","Data":"32c012f02fcf2c50fac4a46da241d81d5a6e2605fb3e450674eb976f7d1a15b8"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.408192 4808 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4b7d57c-s64kv" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.410694 4808 generic.go:334] "Generic (PLEG): container finished" podID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerID="20b19aae727c3d0ad8a267910859427ee10d4f53cfb6db7a6826330892e8ead5" exitCode=0 Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.410900 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerDied","Data":"20b19aae727c3d0ad8a267910859427ee10d4f53cfb6db7a6826330892e8ead5"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.425610 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerStarted","Data":"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d"} Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.433160 4808 scope.go:117] "RemoveContainer" containerID="9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721" Mar 11 08:43:53 crc kubenswrapper[4808]: E0311 08:43:53.434794 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721\": container with ID starting with 9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721 not found: ID does not exist" containerID="9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.434832 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721"} err="failed to get container status 
\"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721\": rpc error: code = NotFound desc = could not find container \"9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721\": container with ID starting with 9d0aad97e7aec8b09d91887ac28109f0db55a3334ab39d4be045e105b9c47721 not found: ID does not exist" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.434858 4808 scope.go:117] "RemoveContainer" containerID="62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.452728 4808 scope.go:117] "RemoveContainer" containerID="62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8" Mar 11 08:43:53 crc kubenswrapper[4808]: E0311 08:43:53.453422 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8\": container with ID starting with 62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8 not found: ID does not exist" containerID="62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.453473 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8"} err="failed to get container status \"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8\": rpc error: code = NotFound desc = could not find container \"62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8\": container with ID starting with 62fc88f7dec1159b18bd30c5dabe5da96b6fbc513f62852d1e278cbad92e9cb8 not found: ID does not exist" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.457425 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxbs4" podStartSLOduration=3.028662201 
podStartE2EDuration="1m1.457405922s" podCreationTimestamp="2026-03-11 08:42:52 +0000 UTC" firstStartedPulling="2026-03-11 08:42:54.438810094 +0000 UTC m=+225.392133404" lastFinishedPulling="2026-03-11 08:43:52.867553805 +0000 UTC m=+283.820877125" observedRunningTime="2026-03-11 08:43:53.454097708 +0000 UTC m=+284.407421028" watchObservedRunningTime="2026-03-11 08:43:53.457405922 +0000 UTC m=+284.410729242" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.464988 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.474945 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657d894494-dcmx4"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.483285 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.485348 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d4b7d57c-s64kv"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.541852 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pf6x9" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="registry-server" probeResult="failure" output=< Mar 11 08:43:53 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:53 crc kubenswrapper[4808]: > Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.663411 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:43:53 crc kubenswrapper[4808]: E0311 08:43:53.663976 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef61fb6f-dce7-4207-b83f-b1877f13875e" 
containerName="controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.663991 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef61fb6f-dce7-4207-b83f-b1877f13875e" containerName="controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: E0311 08:43:53.664005 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" containerName="route-controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664013 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" containerName="route-controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: E0311 08:43:53.664026 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ace785-0297-4201-9d1c-778af0740058" containerName="oc" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664035 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ace785-0297-4201-9d1c-778af0740058" containerName="oc" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664147 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ace785-0297-4201-9d1c-778af0740058" containerName="oc" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664161 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef61fb6f-dce7-4207-b83f-b1877f13875e" containerName="controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664175 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" containerName="route-controller-manager" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.664621 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.666844 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.667292 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.667905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.672591 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.672672 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.673007 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.673556 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.673769 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.673876 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.674139 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.677156 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.677560 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.677754 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.680920 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.680958 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.682695 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.686786 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.796674 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8420f8a-2d3b-440b-990b-c7fa2a491de6" path="/var/lib/kubelet/pods/e8420f8a-2d3b-440b-990b-c7fa2a491de6/volumes" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.797318 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef61fb6f-dce7-4207-b83f-b1877f13875e" path="/var/lib/kubelet/pods/ef61fb6f-dce7-4207-b83f-b1877f13875e/volumes" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843541 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdzz\" (UniqueName: \"kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843598 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843643 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843660 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843700 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: 
\"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843719 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843790 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59bkz\" (UniqueName: \"kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.843822 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.944710 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffdzz\" (UniqueName: \"kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.945429 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.946284 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.946328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.947153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " 
pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.947454 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.948337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.947015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.948449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.948477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config\") pod 
\"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.949376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59bkz\" (UniqueName: \"kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.949454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.949602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.949275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.950044 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.953957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.969057 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffdzz\" (UniqueName: \"kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz\") pod \"controller-manager-79b55fbc6c-564f6\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.969867 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59bkz\" (UniqueName: \"kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz\") pod \"route-controller-manager-59d4d7cf9b-vrv7h\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:53 crc kubenswrapper[4808]: I0311 08:43:53.994251 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.008742 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.338921 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.419042 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:43:54 crc kubenswrapper[4808]: W0311 08:43:54.425550 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod117895d7_5f8b_43b2_ae5b_519c36a3a717.slice/crio-5559990155f0e846e06245976fc42f3dedb47f6ecb356a5f54260d532b6f0f32 WatchSource:0}: Error finding container 5559990155f0e846e06245976fc42f3dedb47f6ecb356a5f54260d532b6f0f32: Status 404 returned error can't find the container with id 5559990155f0e846e06245976fc42f3dedb47f6ecb356a5f54260d532b6f0f32 Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.435669 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerStarted","Data":"fcb4377b2d3b35e96a3230176cec7fc9c868830b5f174049b018f823bd64d989"} Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.453200 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p7jn7" podStartSLOduration=2.089336336 podStartE2EDuration="1m0.453180628s" podCreationTimestamp="2026-03-11 08:42:54 +0000 UTC" firstStartedPulling="2026-03-11 08:42:55.485198303 +0000 UTC m=+226.438521613" lastFinishedPulling="2026-03-11 08:43:53.849042585 +0000 UTC m=+284.802365905" observedRunningTime="2026-03-11 08:43:54.451659835 +0000 UTC m=+285.404983155" watchObservedRunningTime="2026-03-11 08:43:54.453180628 +0000 UTC m=+285.406503948" Mar 11 08:43:54 
crc kubenswrapper[4808]: I0311 08:43:54.488320 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:43:54 crc kubenswrapper[4808]: W0311 08:43:54.500954 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aa16043_ed52_4863_8c2f_6f6556569382.slice/crio-2c2f9039325ae227afa83cd317180216a5cb4db80ae40c988c7db995f722888c WatchSource:0}: Error finding container 2c2f9039325ae227afa83cd317180216a5cb4db80ae40c988c7db995f722888c: Status 404 returned error can't find the container with id 2c2f9039325ae227afa83cd317180216a5cb4db80ae40c988c7db995f722888c Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.745249 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:43:54 crc kubenswrapper[4808]: I0311 08:43:54.745490 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.442737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" event={"ID":"117895d7-5f8b-43b2-ae5b-519c36a3a717","Type":"ContainerStarted","Data":"55931e97c99b8a4019201de8e35d50a1d65f6a871e5ebb48e8973296fa16af75"} Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.443175 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.443194 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" event={"ID":"117895d7-5f8b-43b2-ae5b-519c36a3a717","Type":"ContainerStarted","Data":"5559990155f0e846e06245976fc42f3dedb47f6ecb356a5f54260d532b6f0f32"} Mar 11 
08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.444492 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" event={"ID":"7aa16043-ed52-4863-8c2f-6f6556569382","Type":"ContainerStarted","Data":"4fde05bc793c477b28a2149358456ef36abaf377d74673d9eaaedff9b4b5cdcb"} Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.444532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" event={"ID":"7aa16043-ed52-4863-8c2f-6f6556569382","Type":"ContainerStarted","Data":"2c2f9039325ae227afa83cd317180216a5cb4db80ae40c988c7db995f722888c"} Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.444757 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.449039 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.449737 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.450909 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.450951 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.494742 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" podStartSLOduration=3.494725532 
podStartE2EDuration="3.494725532s" podCreationTimestamp="2026-03-11 08:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:55.492889889 +0000 UTC m=+286.446213220" watchObservedRunningTime="2026-03-11 08:43:55.494725532 +0000 UTC m=+286.448048852" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.496297 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" podStartSLOduration=3.496292976 podStartE2EDuration="3.496292976s" podCreationTimestamp="2026-03-11 08:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:43:55.463095374 +0000 UTC m=+286.416418694" watchObservedRunningTime="2026-03-11 08:43:55.496292976 +0000 UTC m=+286.449616296" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.784640 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p7jn7" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="registry-server" probeResult="failure" output=< Mar 11 08:43:55 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:55 crc kubenswrapper[4808]: > Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.924826 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:43:55 crc kubenswrapper[4808]: I0311 08:43:55.924877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:43:56 crc kubenswrapper[4808]: I0311 08:43:56.492989 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqmwn" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="registry-server" 
probeResult="failure" output=< Mar 11 08:43:56 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:56 crc kubenswrapper[4808]: > Mar 11 08:43:56 crc kubenswrapper[4808]: I0311 08:43:56.630343 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:43:56 crc kubenswrapper[4808]: I0311 08:43:56.630675 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7nmq" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="registry-server" containerID="cri-o://049343137e7e50d65de88b8879d2c16d19987e8b09bcc39698d0944d27482291" gracePeriod=2 Mar 11 08:43:56 crc kubenswrapper[4808]: I0311 08:43:56.967123 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nn2pz" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="registry-server" probeResult="failure" output=< Mar 11 08:43:56 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 08:43:56 crc kubenswrapper[4808]: > Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.460064 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerID="049343137e7e50d65de88b8879d2c16d19987e8b09bcc39698d0944d27482291" exitCode=0 Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.460141 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerDied","Data":"049343137e7e50d65de88b8879d2c16d19987e8b09bcc39698d0944d27482291"} Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.669289 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.705781 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities\") pod \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.705849 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zxkp\" (UniqueName: \"kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp\") pod \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.705954 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content\") pod \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\" (UID: \"5cb71001-8e81-4655-97db-e7bd5eaccb2a\") " Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.706892 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities" (OuterVolumeSpecName: "utilities") pod "5cb71001-8e81-4655-97db-e7bd5eaccb2a" (UID: "5cb71001-8e81-4655-97db-e7bd5eaccb2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.711544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp" (OuterVolumeSpecName: "kube-api-access-9zxkp") pod "5cb71001-8e81-4655-97db-e7bd5eaccb2a" (UID: "5cb71001-8e81-4655-97db-e7bd5eaccb2a"). InnerVolumeSpecName "kube-api-access-9zxkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.769532 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cb71001-8e81-4655-97db-e7bd5eaccb2a" (UID: "5cb71001-8e81-4655-97db-e7bd5eaccb2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.811318 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zxkp\" (UniqueName: \"kubernetes.io/projected/5cb71001-8e81-4655-97db-e7bd5eaccb2a-kube-api-access-9zxkp\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.811426 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:57 crc kubenswrapper[4808]: I0311 08:43:57.811445 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb71001-8e81-4655-97db-e7bd5eaccb2a-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.467767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7nmq" event={"ID":"5cb71001-8e81-4655-97db-e7bd5eaccb2a","Type":"ContainerDied","Data":"a296c9bd23044dd63c43892375a60a792dd98911bab857bb693b8d3bb468456d"} Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.467843 4808 scope.go:117] "RemoveContainer" containerID="049343137e7e50d65de88b8879d2c16d19987e8b09bcc39698d0944d27482291" Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.467914 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7nmq" Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.486182 4808 scope.go:117] "RemoveContainer" containerID="7fdc6bbafce80b90af5831624a0bde3107007101e892ae2ffa1a536b916a00a5" Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.490086 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.493272 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7nmq"] Mar 11 08:43:58 crc kubenswrapper[4808]: I0311 08:43:58.506639 4808 scope.go:117] "RemoveContainer" containerID="60cf3b87aeed0a9ce6e296efc7f4eb33421d98aed1b82a0211dabe1deade4f3f" Mar 11 08:43:59 crc kubenswrapper[4808]: I0311 08:43:59.797179 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" path="/var/lib/kubelet/pods/5cb71001-8e81-4655-97db-e7bd5eaccb2a/volumes" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.138619 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553644-fdj6x"] Mar 11 08:44:00 crc kubenswrapper[4808]: E0311 08:44:00.139623 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="extract-content" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.139666 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="extract-content" Mar 11 08:44:00 crc kubenswrapper[4808]: E0311 08:44:00.139693 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="extract-utilities" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.139699 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" 
containerName="extract-utilities" Mar 11 08:44:00 crc kubenswrapper[4808]: E0311 08:44:00.139711 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="registry-server" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.139718 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="registry-server" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.142023 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb71001-8e81-4655-97db-e7bd5eaccb2a" containerName="registry-server" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.142551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.144841 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.146425 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.146792 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.155536 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553644-fdj6x"] Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.249269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvkf\" (UniqueName: \"kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf\") pod \"auto-csr-approver-29553644-fdj6x\" (UID: \"a1b2195c-43fc-4253-9675-13d6836d7c49\") " pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:00 crc 
kubenswrapper[4808]: I0311 08:44:00.350878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvkf\" (UniqueName: \"kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf\") pod \"auto-csr-approver-29553644-fdj6x\" (UID: \"a1b2195c-43fc-4253-9675-13d6836d7c49\") " pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.378642 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvkf\" (UniqueName: \"kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf\") pod \"auto-csr-approver-29553644-fdj6x\" (UID: \"a1b2195c-43fc-4253-9675-13d6836d7c49\") " pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.472037 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:00 crc kubenswrapper[4808]: W0311 08:44:00.879616 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1b2195c_43fc_4253_9675_13d6836d7c49.slice/crio-90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089 WatchSource:0}: Error finding container 90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089: Status 404 returned error can't find the container with id 90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089 Mar 11 08:44:00 crc kubenswrapper[4808]: I0311 08:44:00.892893 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553644-fdj6x"] Mar 11 08:44:01 crc kubenswrapper[4808]: I0311 08:44:01.492773 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" 
event={"ID":"a1b2195c-43fc-4253-9675-13d6836d7c49","Type":"ContainerStarted","Data":"90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089"} Mar 11 08:44:02 crc kubenswrapper[4808]: I0311 08:44:02.545210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:44:02 crc kubenswrapper[4808]: I0311 08:44:02.587054 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:44:02 crc kubenswrapper[4808]: I0311 08:44:02.918067 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:02 crc kubenswrapper[4808]: I0311 08:44:02.918148 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:02 crc kubenswrapper[4808]: I0311 08:44:02.970731 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:03 crc kubenswrapper[4808]: I0311 08:44:03.504377 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" event={"ID":"a1b2195c-43fc-4253-9675-13d6836d7c49","Type":"ContainerStarted","Data":"4b4e3d4d2083ba9f10c7206fa7a62ec4ff0cec8a7039796a036308eb0e3a51b9"} Mar 11 08:44:03 crc kubenswrapper[4808]: I0311 08:44:03.519285 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" podStartSLOduration=1.291552606 podStartE2EDuration="3.519266286s" podCreationTimestamp="2026-03-11 08:44:00 +0000 UTC" firstStartedPulling="2026-03-11 08:44:00.884921227 +0000 UTC m=+291.838244547" lastFinishedPulling="2026-03-11 08:44:03.112634907 +0000 UTC m=+294.065958227" observedRunningTime="2026-03-11 08:44:03.51871895 +0000 UTC m=+294.472042300" 
watchObservedRunningTime="2026-03-11 08:44:03.519266286 +0000 UTC m=+294.472589606" Mar 11 08:44:03 crc kubenswrapper[4808]: I0311 08:44:03.547554 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:04 crc kubenswrapper[4808]: I0311 08:44:04.513143 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1b2195c-43fc-4253-9675-13d6836d7c49" containerID="4b4e3d4d2083ba9f10c7206fa7a62ec4ff0cec8a7039796a036308eb0e3a51b9" exitCode=0 Mar 11 08:44:04 crc kubenswrapper[4808]: I0311 08:44:04.515113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" event={"ID":"a1b2195c-43fc-4253-9675-13d6836d7c49","Type":"ContainerDied","Data":"4b4e3d4d2083ba9f10c7206fa7a62ec4ff0cec8a7039796a036308eb0e3a51b9"} Mar 11 08:44:04 crc kubenswrapper[4808]: I0311 08:44:04.769568 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:44:04 crc kubenswrapper[4808]: I0311 08:44:04.782595 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:44:04 crc kubenswrapper[4808]: I0311 08:44:04.819888 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.508338 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.519481 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxbs4" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="registry-server" containerID="cri-o://c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d" gracePeriod=2 Mar 11 08:44:05 crc 
kubenswrapper[4808]: I0311 08:44:05.555568 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.751270 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vbbdr"] Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.896155 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.940138 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlvkf\" (UniqueName: \"kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf\") pod \"a1b2195c-43fc-4253-9675-13d6836d7c49\" (UID: \"a1b2195c-43fc-4253-9675-13d6836d7c49\") " Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.968551 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf" (OuterVolumeSpecName: "kube-api-access-mlvkf") pod "a1b2195c-43fc-4253-9675-13d6836d7c49" (UID: "a1b2195c-43fc-4253-9675-13d6836d7c49"). InnerVolumeSpecName "kube-api-access-mlvkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:05 crc kubenswrapper[4808]: I0311 08:44:05.979560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.024869 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.041538 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlvkf\" (UniqueName: \"kubernetes.io/projected/a1b2195c-43fc-4253-9675-13d6836d7c49-kube-api-access-mlvkf\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.108417 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.142709 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7z98\" (UniqueName: \"kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98\") pod \"8d9db75f-12f9-4870-94c5-474c8b74f021\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.142865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities\") pod \"8d9db75f-12f9-4870-94c5-474c8b74f021\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.142888 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content\") pod \"8d9db75f-12f9-4870-94c5-474c8b74f021\" (UID: \"8d9db75f-12f9-4870-94c5-474c8b74f021\") " Mar 11 08:44:06 
crc kubenswrapper[4808]: I0311 08:44:06.143763 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities" (OuterVolumeSpecName: "utilities") pod "8d9db75f-12f9-4870-94c5-474c8b74f021" (UID: "8d9db75f-12f9-4870-94c5-474c8b74f021"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.145333 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98" (OuterVolumeSpecName: "kube-api-access-j7z98") pod "8d9db75f-12f9-4870-94c5-474c8b74f021" (UID: "8d9db75f-12f9-4870-94c5-474c8b74f021"). InnerVolumeSpecName "kube-api-access-j7z98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.205206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d9db75f-12f9-4870-94c5-474c8b74f021" (UID: "8d9db75f-12f9-4870-94c5-474c8b74f021"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.245140 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.245194 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9db75f-12f9-4870-94c5-474c8b74f021-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.245218 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7z98\" (UniqueName: \"kubernetes.io/projected/8d9db75f-12f9-4870-94c5-474c8b74f021-kube-api-access-j7z98\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.527425 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerID="c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d" exitCode=0 Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.527521 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxbs4" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.527524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerDied","Data":"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d"} Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.527750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxbs4" event={"ID":"8d9db75f-12f9-4870-94c5-474c8b74f021","Type":"ContainerDied","Data":"c1d164e250c1bbd115334178dc47780ce937c461d2848eab38765c830132fc50"} Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.527784 4808 scope.go:117] "RemoveContainer" containerID="c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.530704 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.530722 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553644-fdj6x" event={"ID":"a1b2195c-43fc-4253-9675-13d6836d7c49","Type":"ContainerDied","Data":"90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089"} Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.530778 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c7740f8f38715d150666ad02ca341a93e129f0d2c51241dae737982de54089" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.554546 4808 scope.go:117] "RemoveContainer" containerID="9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.578352 4808 scope.go:117] "RemoveContainer" containerID="eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.599128 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.604320 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxbs4"] Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.610548 4808 scope.go:117] "RemoveContainer" containerID="c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d" Mar 11 08:44:06 crc kubenswrapper[4808]: E0311 08:44:06.615690 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d\": container with ID starting with c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d not found: ID does not exist" containerID="c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d" Mar 11 08:44:06 crc 
kubenswrapper[4808]: I0311 08:44:06.615745 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d"} err="failed to get container status \"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d\": rpc error: code = NotFound desc = could not find container \"c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d\": container with ID starting with c0689ffb2a068bf6e3adbd68ccab1889e7cdff50f2ce57957441dbe4a961013d not found: ID does not exist" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.615774 4808 scope.go:117] "RemoveContainer" containerID="9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73" Mar 11 08:44:06 crc kubenswrapper[4808]: E0311 08:44:06.616292 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73\": container with ID starting with 9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73 not found: ID does not exist" containerID="9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.616350 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73"} err="failed to get container status \"9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73\": rpc error: code = NotFound desc = could not find container \"9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73\": container with ID starting with 9fe1608911440ea872369dcc688840473f2b41c0ead00e2f77b20b66a4f55a73 not found: ID does not exist" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.616445 4808 scope.go:117] "RemoveContainer" containerID="eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629" Mar 11 
08:44:06 crc kubenswrapper[4808]: E0311 08:44:06.616911 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629\": container with ID starting with eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629 not found: ID does not exist" containerID="eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629" Mar 11 08:44:06 crc kubenswrapper[4808]: I0311 08:44:06.616939 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629"} err="failed to get container status \"eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629\": rpc error: code = NotFound desc = could not find container \"eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629\": container with ID starting with eb2afbb6d4f5324a9a313b5908928b0c491d5c525a7e5264824969106a3c1629 not found: ID does not exist" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.173848 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.174783 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p7jn7" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="registry-server" containerID="cri-o://fcb4377b2d3b35e96a3230176cec7fc9c868830b5f174049b018f823bd64d989" gracePeriod=2 Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.538347 4808 generic.go:334] "Generic (PLEG): container finished" podID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerID="fcb4377b2d3b35e96a3230176cec7fc9c868830b5f174049b018f823bd64d989" exitCode=0 Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.538406 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerDied","Data":"fcb4377b2d3b35e96a3230176cec7fc9c868830b5f174049b018f823bd64d989"} Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.623665 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.665724 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6jf\" (UniqueName: \"kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf\") pod \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.665803 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content\") pod \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.665872 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities\") pod \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\" (UID: \"8c04d52d-2eda-40d3-8252-ac2e14d0a861\") " Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.667002 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities" (OuterVolumeSpecName: "utilities") pod "8c04d52d-2eda-40d3-8252-ac2e14d0a861" (UID: "8c04d52d-2eda-40d3-8252-ac2e14d0a861"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.682503 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf" (OuterVolumeSpecName: "kube-api-access-rf6jf") pod "8c04d52d-2eda-40d3-8252-ac2e14d0a861" (UID: "8c04d52d-2eda-40d3-8252-ac2e14d0a861"). InnerVolumeSpecName "kube-api-access-rf6jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.687633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c04d52d-2eda-40d3-8252-ac2e14d0a861" (UID: "8c04d52d-2eda-40d3-8252-ac2e14d0a861"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.767310 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf6jf\" (UniqueName: \"kubernetes.io/projected/8c04d52d-2eda-40d3-8252-ac2e14d0a861-kube-api-access-rf6jf\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.767346 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.767392 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c04d52d-2eda-40d3-8252-ac2e14d0a861-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:07 crc kubenswrapper[4808]: I0311 08:44:07.798842 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" 
path="/var/lib/kubelet/pods/8d9db75f-12f9-4870-94c5-474c8b74f021/volumes" Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.550173 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7jn7" event={"ID":"8c04d52d-2eda-40d3-8252-ac2e14d0a861","Type":"ContainerDied","Data":"5a1a7c5e5e8d459c48e5780db5bae38a05be7ae5e32d1ced13d7be4abe378296"} Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.551547 4808 scope.go:117] "RemoveContainer" containerID="fcb4377b2d3b35e96a3230176cec7fc9c868830b5f174049b018f823bd64d989" Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.551669 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7jn7" Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.574219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.578714 4808 scope.go:117] "RemoveContainer" containerID="20b19aae727c3d0ad8a267910859427ee10d4f53cfb6db7a6826330892e8ead5" Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.581901 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7jn7"] Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.603685 4808 scope.go:117] "RemoveContainer" containerID="11271cd9d7dbca602dd6d35ca2cf0972cd16896cb48334121797004bc4b4eb28" Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.976806 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:44:08 crc kubenswrapper[4808]: I0311 08:44:08.977138 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nn2pz" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="registry-server" containerID="cri-o://d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604" 
gracePeriod=2 Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.453670 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.494214 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content\") pod \"06cf5014-8032-4d6d-b905-1d7196c123c7\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.494256 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities\") pod \"06cf5014-8032-4d6d-b905-1d7196c123c7\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.494302 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgnjp\" (UniqueName: \"kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp\") pod \"06cf5014-8032-4d6d-b905-1d7196c123c7\" (UID: \"06cf5014-8032-4d6d-b905-1d7196c123c7\") " Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.495585 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities" (OuterVolumeSpecName: "utilities") pod "06cf5014-8032-4d6d-b905-1d7196c123c7" (UID: "06cf5014-8032-4d6d-b905-1d7196c123c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.498270 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp" (OuterVolumeSpecName: "kube-api-access-fgnjp") pod "06cf5014-8032-4d6d-b905-1d7196c123c7" (UID: "06cf5014-8032-4d6d-b905-1d7196c123c7"). InnerVolumeSpecName "kube-api-access-fgnjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.564638 4808 generic.go:334] "Generic (PLEG): container finished" podID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerID="d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604" exitCode=0 Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.564677 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerDied","Data":"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604"} Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.564704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn2pz" event={"ID":"06cf5014-8032-4d6d-b905-1d7196c123c7","Type":"ContainerDied","Data":"9177ac8f927c4fb8931bf30538c47aceff331d1de7ca078b99f112d7e3e49d7f"} Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.564723 4808 scope.go:117] "RemoveContainer" containerID="d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.564741 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn2pz" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.585499 4808 scope.go:117] "RemoveContainer" containerID="93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.602133 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.602172 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgnjp\" (UniqueName: \"kubernetes.io/projected/06cf5014-8032-4d6d-b905-1d7196c123c7-kube-api-access-fgnjp\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.633034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06cf5014-8032-4d6d-b905-1d7196c123c7" (UID: "06cf5014-8032-4d6d-b905-1d7196c123c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.643930 4808 scope.go:117] "RemoveContainer" containerID="4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.669559 4808 scope.go:117] "RemoveContainer" containerID="d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604" Mar 11 08:44:09 crc kubenswrapper[4808]: E0311 08:44:09.670064 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604\": container with ID starting with d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604 not found: ID does not exist" containerID="d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.670106 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604"} err="failed to get container status \"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604\": rpc error: code = NotFound desc = could not find container \"d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604\": container with ID starting with d6b53f37386761f805c4e305a0cad2ecf5dc1cb8ab21a8cf4af29509e0985604 not found: ID does not exist" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.670133 4808 scope.go:117] "RemoveContainer" containerID="93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb" Mar 11 08:44:09 crc kubenswrapper[4808]: E0311 08:44:09.670429 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb\": container with ID starting with 
93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb not found: ID does not exist" containerID="93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.670448 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb"} err="failed to get container status \"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb\": rpc error: code = NotFound desc = could not find container \"93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb\": container with ID starting with 93cd7d5b146d97d7cae6f2c9b7142aa64f730a0f0af35b684c55560eafddcecb not found: ID does not exist" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.670469 4808 scope.go:117] "RemoveContainer" containerID="4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c" Mar 11 08:44:09 crc kubenswrapper[4808]: E0311 08:44:09.670693 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c\": container with ID starting with 4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c not found: ID does not exist" containerID="4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.670710 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c"} err="failed to get container status \"4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c\": rpc error: code = NotFound desc = could not find container \"4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c\": container with ID starting with 4d98b80721e95ed03e9a1f56cfc48377711f03acf42242d8e72dc321cd0e2b8c not found: ID does not 
exist" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.703030 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06cf5014-8032-4d6d-b905-1d7196c123c7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.817889 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" path="/var/lib/kubelet/pods/8c04d52d-2eda-40d3-8252-ac2e14d0a861/volumes" Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.887295 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:44:09 crc kubenswrapper[4808]: I0311 08:44:09.891514 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nn2pz"] Mar 11 08:44:11 crc kubenswrapper[4808]: I0311 08:44:11.796280 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" path="/var/lib/kubelet/pods/06cf5014-8032-4d6d-b905-1d7196c123c7/volumes" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.349151 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.349348 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" podUID="117895d7-5f8b-43b2-ae5b-519c36a3a717" containerName="controller-manager" containerID="cri-o://55931e97c99b8a4019201de8e35d50a1d65f6a871e5ebb48e8973296fa16af75" gracePeriod=30 Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.443951 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.444176 4808 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" podUID="7aa16043-ed52-4863-8c2f-6f6556569382" containerName="route-controller-manager" containerID="cri-o://4fde05bc793c477b28a2149358456ef36abaf377d74673d9eaaedff9b4b5cdcb" gracePeriod=30 Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.587205 4808 generic.go:334] "Generic (PLEG): container finished" podID="7aa16043-ed52-4863-8c2f-6f6556569382" containerID="4fde05bc793c477b28a2149358456ef36abaf377d74673d9eaaedff9b4b5cdcb" exitCode=0 Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.587305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" event={"ID":"7aa16043-ed52-4863-8c2f-6f6556569382","Type":"ContainerDied","Data":"4fde05bc793c477b28a2149358456ef36abaf377d74673d9eaaedff9b4b5cdcb"} Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.589020 4808 generic.go:334] "Generic (PLEG): container finished" podID="117895d7-5f8b-43b2-ae5b-519c36a3a717" containerID="55931e97c99b8a4019201de8e35d50a1d65f6a871e5ebb48e8973296fa16af75" exitCode=0 Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.589057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" event={"ID":"117895d7-5f8b-43b2-ae5b-519c36a3a717","Type":"ContainerDied","Data":"55931e97c99b8a4019201de8e35d50a1d65f6a871e5ebb48e8973296fa16af75"} Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.887229 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.954971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config\") pod \"7aa16043-ed52-4863-8c2f-6f6556569382\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.955006 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert\") pod \"7aa16043-ed52-4863-8c2f-6f6556569382\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.955061 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca\") pod \"7aa16043-ed52-4863-8c2f-6f6556569382\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.955139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59bkz\" (UniqueName: \"kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz\") pod \"7aa16043-ed52-4863-8c2f-6f6556569382\" (UID: \"7aa16043-ed52-4863-8c2f-6f6556569382\") " Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.955988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config" (OuterVolumeSpecName: "config") pod "7aa16043-ed52-4863-8c2f-6f6556569382" (UID: "7aa16043-ed52-4863-8c2f-6f6556569382"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.956415 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca" (OuterVolumeSpecName: "client-ca") pod "7aa16043-ed52-4863-8c2f-6f6556569382" (UID: "7aa16043-ed52-4863-8c2f-6f6556569382"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.956578 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.960122 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7aa16043-ed52-4863-8c2f-6f6556569382" (UID: "7aa16043-ed52-4863-8c2f-6f6556569382"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:12 crc kubenswrapper[4808]: I0311 08:44:12.961024 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz" (OuterVolumeSpecName: "kube-api-access-59bkz") pod "7aa16043-ed52-4863-8c2f-6f6556569382" (UID: "7aa16043-ed52-4863-8c2f-6f6556569382"). InnerVolumeSpecName "kube-api-access-59bkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.056813 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles\") pod \"117895d7-5f8b-43b2-ae5b-519c36a3a717\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057090 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffdzz\" (UniqueName: \"kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz\") pod \"117895d7-5f8b-43b2-ae5b-519c36a3a717\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057239 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config\") pod \"117895d7-5f8b-43b2-ae5b-519c36a3a717\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057373 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca\") pod \"117895d7-5f8b-43b2-ae5b-519c36a3a717\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert\") pod \"117895d7-5f8b-43b2-ae5b-519c36a3a717\" (UID: \"117895d7-5f8b-43b2-ae5b-519c36a3a717\") " Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057821 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca" (OuterVolumeSpecName: "client-ca") pod "117895d7-5f8b-43b2-ae5b-519c36a3a717" (UID: "117895d7-5f8b-43b2-ae5b-519c36a3a717"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057865 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config" (OuterVolumeSpecName: "config") pod "117895d7-5f8b-43b2-ae5b-519c36a3a717" (UID: "117895d7-5f8b-43b2-ae5b-519c36a3a717"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.057881 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "117895d7-5f8b-43b2-ae5b-519c36a3a717" (UID: "117895d7-5f8b-43b2-ae5b-519c36a3a717"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058667 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058689 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058701 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058714 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59bkz\" (UniqueName: \"kubernetes.io/projected/7aa16043-ed52-4863-8c2f-6f6556569382-kube-api-access-59bkz\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058727 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa16043-ed52-4863-8c2f-6f6556569382-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058738 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa16043-ed52-4863-8c2f-6f6556569382-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.058749 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/117895d7-5f8b-43b2-ae5b-519c36a3a717-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.060168 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz" (OuterVolumeSpecName: "kube-api-access-ffdzz") pod "117895d7-5f8b-43b2-ae5b-519c36a3a717" (UID: "117895d7-5f8b-43b2-ae5b-519c36a3a717"). InnerVolumeSpecName "kube-api-access-ffdzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.061119 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "117895d7-5f8b-43b2-ae5b-519c36a3a717" (UID: "117895d7-5f8b-43b2-ae5b-519c36a3a717"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.159836 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117895d7-5f8b-43b2-ae5b-519c36a3a717-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.159867 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffdzz\" (UniqueName: \"kubernetes.io/projected/117895d7-5f8b-43b2-ae5b-519c36a3a717-kube-api-access-ffdzz\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.595463 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" event={"ID":"117895d7-5f8b-43b2-ae5b-519c36a3a717","Type":"ContainerDied","Data":"5559990155f0e846e06245976fc42f3dedb47f6ecb356a5f54260d532b6f0f32"} Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.595520 4808 scope.go:117] "RemoveContainer" containerID="55931e97c99b8a4019201de8e35d50a1d65f6a871e5ebb48e8973296fa16af75" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.595633 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79b55fbc6c-564f6" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.608870 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" event={"ID":"7aa16043-ed52-4863-8c2f-6f6556569382","Type":"ContainerDied","Data":"2c2f9039325ae227afa83cd317180216a5cb4db80ae40c988c7db995f722888c"} Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.608927 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.626493 4808 scope.go:117] "RemoveContainer" containerID="4fde05bc793c477b28a2149358456ef36abaf377d74673d9eaaedff9b4b5cdcb" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.635156 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.638073 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d4d7cf9b-vrv7h"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.647694 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.650183 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79b55fbc6c-564f6"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694563 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7"] Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694768 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694781 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694793 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694799 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694809 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694815 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694822 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa16043-ed52-4863-8c2f-6f6556569382" containerName="route-controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694828 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa16043-ed52-4863-8c2f-6f6556569382" containerName="route-controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694839 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694845 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694853 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694859 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="extract-utilities" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b2195c-43fc-4253-9675-13d6836d7c49" containerName="oc" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694876 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b2195c-43fc-4253-9675-13d6836d7c49" containerName="oc" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="117895d7-5f8b-43b2-ae5b-519c36a3a717" containerName="controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694891 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="117895d7-5f8b-43b2-ae5b-519c36a3a717" containerName="controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694899 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694904 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694913 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694919 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="extract-content" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694926 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694931 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: E0311 08:44:13.694949 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.694955 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695035 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b2195c-43fc-4253-9675-13d6836d7c49" containerName="oc" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695047 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa16043-ed52-4863-8c2f-6f6556569382" containerName="route-controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695053 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9db75f-12f9-4870-94c5-474c8b74f021" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695059 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="117895d7-5f8b-43b2-ae5b-519c36a3a717" containerName="controller-manager" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695068 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cf5014-8032-4d6d-b905-1d7196c123c7" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695076 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c04d52d-2eda-40d3-8252-ac2e14d0a861" containerName="registry-server" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.695434 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.697945 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.698099 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.698204 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.698311 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.698532 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.700983 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.702274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.702788 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.703739 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.704321 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.704392 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.704405 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.704543 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.705774 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.709846 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.711879 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx"] Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.714750 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.767340 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-client-ca\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.767425 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-serving-cert\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.767715 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-proxy-ca-bundles\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.767942 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8lv8\" (UniqueName: \"kubernetes.io/projected/3a5b563b-dfdf-4720-a13f-85f72959f234-kube-api-access-z8lv8\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.768038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-config\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " 
pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.768077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-client-ca\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.768165 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-config\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.768193 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b563b-dfdf-4720-a13f-85f72959f234-serving-cert\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.768227 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzsz9\" (UniqueName: \"kubernetes.io/projected/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-kube-api-access-mzsz9\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.795069 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="117895d7-5f8b-43b2-ae5b-519c36a3a717" path="/var/lib/kubelet/pods/117895d7-5f8b-43b2-ae5b-519c36a3a717/volumes" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.795589 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa16043-ed52-4863-8c2f-6f6556569382" path="/var/lib/kubelet/pods/7aa16043-ed52-4863-8c2f-6f6556569382/volumes" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-config\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-client-ca\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-config\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b563b-dfdf-4720-a13f-85f72959f234-serving-cert\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " 
pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869823 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzsz9\" (UniqueName: \"kubernetes.io/projected/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-kube-api-access-mzsz9\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-client-ca\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869885 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-serving-cert\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-proxy-ca-bundles\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.869963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8lv8\" (UniqueName: 
\"kubernetes.io/projected/3a5b563b-dfdf-4720-a13f-85f72959f234-kube-api-access-z8lv8\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.872159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-proxy-ca-bundles\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.872409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-client-ca\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.872651 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-client-ca\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.872724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5b563b-dfdf-4720-a13f-85f72959f234-config\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.881555 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-config\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.882448 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-serving-cert\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.882823 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5b563b-dfdf-4720-a13f-85f72959f234-serving-cert\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.893661 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8lv8\" (UniqueName: \"kubernetes.io/projected/3a5b563b-dfdf-4720-a13f-85f72959f234-kube-api-access-z8lv8\") pod \"route-controller-manager-7774777c67-x9jd7\" (UID: \"3a5b563b-dfdf-4720-a13f-85f72959f234\") " pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:13 crc kubenswrapper[4808]: I0311 08:44:13.894031 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzsz9\" (UniqueName: \"kubernetes.io/projected/ee8ed134-1c66-4b5d-925c-6cfc105dde4b-kube-api-access-mzsz9\") pod \"controller-manager-6d7975b9fc-7zfbx\" (UID: \"ee8ed134-1c66-4b5d-925c-6cfc105dde4b\") " 
pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.011045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.022204 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.060895 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.061838 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.062529 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.062917 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221" gracePeriod=15 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.063089 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea" gracePeriod=15 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.063137 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203" gracePeriod=15 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.063156 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628" gracePeriod=15 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.063090 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e" gracePeriod=15 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.063682 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064682 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064706 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064720 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064727 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064735 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064779 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064793 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064801 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064816 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064823 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064837 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064854 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064863 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064872 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064882 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064889 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.064904 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.064910 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065076 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065091 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065105 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065114 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065126 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 11 08:44:14 
crc kubenswrapper[4808]: I0311 08:44:14.065136 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065146 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065157 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.065260 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065269 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.065380 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.100643 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177951 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.177987 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.178018 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.178045 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279350 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279378 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279445 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279445 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279502 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279474 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.279660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.397333 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.423473 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189bbcf8a5f4ba24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,LastTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.483813 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e" Netns:"/var/run/netns/6fb9b1d8-cb5d-44a8-b3b5-af38bfcb264e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc kubenswrapper[4808]: > Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.483918 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e" Netns:"/var/run/netns/6fb9b1d8-cb5d-44a8-b3b5-af38bfcb264e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc 
kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.483943 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e" Netns:"/var/run/netns/6fb9b1d8-cb5d-44a8-b3b5-af38bfcb264e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.484015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e\\\" Netns:\\\"/var/run/netns/6fb9b1d8-cb5d-44a8-b3b5-af38bfcb264e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=5530c8e7eb759410252c91f454ef62413ce13eb36136bb4ecafaee5548a6bc1e;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: 
[openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" podUID="ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.640044 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc4834dbdafa716f6eec74be9c6f155c1595dc7e65e84506b3ab99ed40204abe"} Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.641838 4808 generic.go:334] "Generic (PLEG): container finished" podID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" containerID="dcad9b652a3d7e1a8db40c6e9fab2d4d4f80d84902747a4d68e6ea4f938dc1de" exitCode=0 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.641927 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"30c9c874-96ee-4ff1-88fb-eedbd0d114e8","Type":"ContainerDied","Data":"dcad9b652a3d7e1a8db40c6e9fab2d4d4f80d84902747a4d68e6ea4f938dc1de"} Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.642684 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.643018 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.643252 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.644849 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.645797 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646275 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203" exitCode=0 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646298 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e" exitCode=0 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646305 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea" exitCode=0 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646314 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628" exitCode=2 Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646395 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.646835 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:14 crc kubenswrapper[4808]: I0311 08:44:14.647037 4808 scope.go:117] "RemoveContainer" containerID="9f99d53f25011afeb242934cb64a3cce6d5a03c17bd6008729f5e07298f152fb" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.712293 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466" Netns:"/var/run/netns/3052d99f-3d4f-45ad-a570-985e86597ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc kubenswrapper[4808]: > Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.712381 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466" Netns:"/var/run/netns/3052d99f-3d4f-45ad-a570-985e86597ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: 
[openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.712405 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:14 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466" Netns:"/var/run/netns/3052d99f-3d4f-45ad-a570-985e86597ff9" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:14 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:14 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:14 crc kubenswrapper[4808]: E0311 08:44:14.712504 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466\\\" Netns:\\\"/var/run/netns/3052d99f-3d4f-45ad-a570-985e86597ff9\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=8949aecda9d2dca0a8220cfcb52e24a2b0ed53b699a2b0e72bcc1b285baf2466;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.133977 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.134767 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.135072 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.135338 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.135640 4808 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.135674 4808 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.135940 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.212634 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:15 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba" Netns:"/var/run/netns/730f23ad-3380-4e69-97d0-a6a9504e97a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: 
error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:15 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:15 crc kubenswrapper[4808]: > Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.212701 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:15 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba" Netns:"/var/run/netns/730f23ad-3380-4e69-97d0-a6a9504e97a7" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:15 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:15 crc kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.212721 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:15 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba): error adding pod 
openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba" Netns:"/var/run/netns/730f23ad-3380-4e69-97d0-a6a9504e97a7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:15 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:15 crc kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.212781 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba\\\" Netns:\\\"/var/run/netns/730f23ad-3380-4e69-97d0-a6a9504e97a7\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=d3144db620709a7403284390c318cd9ed6fbf93da52f7a8dbdc72596641f27ba;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection 
refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" podUID="ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.337311 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.424853 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.424927 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.658347 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 
08:44:15.663507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699"} Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.663590 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.664033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.664439 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.664767 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.665053 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.738288 4808 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Mar 11 08:44:15 crc kubenswrapper[4808]: E0311 08:44:15.927325 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189bbcf8a5f4ba24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,LastTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.985667 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.986238 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:15 crc kubenswrapper[4808]: I0311 08:44:15.986622 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.115715 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock\") pod \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.115806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access\") pod \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.115951 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir\") pod \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\" (UID: \"30c9c874-96ee-4ff1-88fb-eedbd0d114e8\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.116489 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "30c9c874-96ee-4ff1-88fb-eedbd0d114e8" (UID: "30c9c874-96ee-4ff1-88fb-eedbd0d114e8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.116559 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock" (OuterVolumeSpecName: "var-lock") pod "30c9c874-96ee-4ff1-88fb-eedbd0d114e8" (UID: "30c9c874-96ee-4ff1-88fb-eedbd0d114e8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.128679 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "30c9c874-96ee-4ff1-88fb-eedbd0d114e8" (UID: "30c9c874-96ee-4ff1-88fb-eedbd0d114e8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.222319 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.222763 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-var-lock\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.222780 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30c9c874-96ee-4ff1-88fb-eedbd0d114e8-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.379739 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:16 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f" Netns:"/var/run/netns/ac56c5a3-0d3c-48d5-a7b4-5a6df1a5aac9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:16 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:16 crc kubenswrapper[4808]: > Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.379810 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:16 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f" 
Netns:"/var/run/netns/ac56c5a3-0d3c-48d5-a7b4-5a6df1a5aac9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:16 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:16 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.379840 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:16 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f" Netns:"/var/run/netns/ac56c5a3-0d3c-48d5-a7b4-5a6df1a5aac9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:16 crc kubenswrapper[4808]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:16 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.379924 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f\\\" Netns:\\\"/var/run/netns/ac56c5a3-0d3c-48d5-a7b4-5a6df1a5aac9\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=2a02de88b3d308650bbf9d257d48cbd46e60f28847eaf5ce1f157cfa34581e2f;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.426225 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.427139 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.428478 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.429178 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.429783 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526610 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526669 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526716 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.526925 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.527229 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.527262 4808 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.527279 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.539455 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.670807 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.670818 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30c9c874-96ee-4ff1-88fb-eedbd0d114e8","Type":"ContainerDied","Data":"1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026"} Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.670868 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c217141f0b077043594ce584240558c7cead2f99363b5b6a3066ff19b88e026" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.675631 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.676878 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221" exitCode=0 Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.676983 4808 scope.go:117] "RemoveContainer" containerID="46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.677056 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.686342 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.686876 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.687421 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.702267 4808 scope.go:117] "RemoveContainer" containerID="506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.703209 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.703734 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.704087 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.725809 4808 scope.go:117] "RemoveContainer" containerID="aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.746501 4808 scope.go:117] "RemoveContainer" containerID="675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.766920 4808 scope.go:117] "RemoveContainer" containerID="b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.786655 4808 scope.go:117] "RemoveContainer" containerID="184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.810133 4808 scope.go:117] "RemoveContainer" containerID="46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.810880 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\": container with ID starting with 46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203 not found: ID does not exist" containerID="46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203" 
Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.810992 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203"} err="failed to get container status \"46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\": rpc error: code = NotFound desc = could not find container \"46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203\": container with ID starting with 46e7f27f0f53872704c9eff7a30b8cb82afea98d12ec0723521b901c7503f203 not found: ID does not exist" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.811084 4808 scope.go:117] "RemoveContainer" containerID="506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.811694 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\": container with ID starting with 506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e not found: ID does not exist" containerID="506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.811728 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e"} err="failed to get container status \"506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\": rpc error: code = NotFound desc = could not find container \"506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e\": container with ID starting with 506d71773927c014aeafefdc00c93d5faf4e6a5827bc8b47f525f3ea970ed14e not found: ID does not exist" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.811756 4808 scope.go:117] "RemoveContainer" 
containerID="aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.812272 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\": container with ID starting with aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea not found: ID does not exist" containerID="aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.812323 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea"} err="failed to get container status \"aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\": rpc error: code = NotFound desc = could not find container \"aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea\": container with ID starting with aa1f460dd013b0b42bfb8031fcb1e2f49ace0fe24ce8194bd6e6970ad3e750ea not found: ID does not exist" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.812352 4808 scope.go:117] "RemoveContainer" containerID="675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.812790 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\": container with ID starting with 675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628 not found: ID does not exist" containerID="675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.812869 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628"} err="failed to get container status \"675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\": rpc error: code = NotFound desc = could not find container \"675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628\": container with ID starting with 675da4e867613d165a41176c1b725f3601ffdd7de762c52efec0ee5485b1d628 not found: ID does not exist" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.812923 4808 scope.go:117] "RemoveContainer" containerID="b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.813391 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\": container with ID starting with b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221 not found: ID does not exist" containerID="b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.813422 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221"} err="failed to get container status \"b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\": rpc error: code = NotFound desc = could not find container \"b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221\": container with ID starting with b9a36b70daad0e5e33a84c699b4c0680f47f80bad4c9852616ea933ba8c6b221 not found: ID does not exist" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.813441 4808 scope.go:117] "RemoveContainer" containerID="184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea" Mar 11 08:44:16 crc kubenswrapper[4808]: E0311 08:44:16.813943 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\": container with ID starting with 184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea not found: ID does not exist" containerID="184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea" Mar 11 08:44:16 crc kubenswrapper[4808]: I0311 08:44:16.813986 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea"} err="failed to get container status \"184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\": rpc error: code = NotFound desc = could not find container \"184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea\": container with ID starting with 184b4659910cdc9b9ee8dc34db1ede687099f4a5103dfba9e9a2fc0428c4e2ea not found: ID does not exist" Mar 11 08:44:17 crc kubenswrapper[4808]: I0311 08:44:17.805383 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 11 08:44:18 crc kubenswrapper[4808]: E0311 08:44:18.140791 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Mar 11 08:44:19 crc kubenswrapper[4808]: I0311 08:44:19.792191 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:19 crc kubenswrapper[4808]: I0311 08:44:19.793483 4808 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:21 crc kubenswrapper[4808]: E0311 08:44:21.362534 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="6.4s" Mar 11 08:44:25 crc kubenswrapper[4808]: I0311 08:44:25.789464 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:25 crc kubenswrapper[4808]: I0311 08:44:25.790835 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:25 crc kubenswrapper[4808]: E0311 08:44:25.928823 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189bbcf8a5f4ba24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,LastTimestamp:2026-03-11 08:44:14.422612516 +0000 UTC m=+305.375935836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.133103 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:44:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:44:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:44:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:44:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.133696 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.133976 4808 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.134216 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.134459 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.134484 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.531855 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:26 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d" Netns:"/var/run/netns/e58bbaba-f45e-44dd-bd60-73954157901d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:26 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:26 crc kubenswrapper[4808]: > Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.531920 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:26 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d" Netns:"/var/run/netns/e58bbaba-f45e-44dd-bd60-73954157901d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:26 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:26 crc kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.531940 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:26 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to 
create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d" Netns:"/var/run/netns/e58bbaba-f45e-44dd-bd60-73954157901d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:26 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 
08:44:26 crc kubenswrapper[4808]: > pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.532001 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager(ee8ed134-1c66-4b5d-925c-6cfc105dde4b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6d7975b9fc-7zfbx_openshift-controller-manager_ee8ed134-1c66-4b5d-925c-6cfc105dde4b_0(7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d): error adding pod openshift-controller-manager_controller-manager-6d7975b9fc-7zfbx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d\\\" Netns:\\\"/var/run/netns/e58bbaba-f45e-44dd-bd60-73954157901d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6d7975b9fc-7zfbx;K8S_POD_INFRA_CONTAINER_ID=7d810bfabad609d277987529374eb49d4696560f752dfdf37bcffacfc79c2d0d;K8S_POD_UID=ee8ed134-1c66-4b5d-925c-6cfc105dde4b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx] networking: Multus: [openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx/ee8ed134-1c66-4b5d-925c-6cfc105dde4b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6d7975b9fc-7zfbx in out of cluster comm: status update failed 
for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6d7975b9fc-7zfbx?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" podUID="ee8ed134-1c66-4b5d-925c-6cfc105dde4b" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.789246 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.791122 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.791861 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.812981 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.813030 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:26 crc kubenswrapper[4808]: E0311 08:44:26.813621 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:26 crc kubenswrapper[4808]: I0311 08:44:26.814717 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:26 crc kubenswrapper[4808]: W0311 08:44:26.848197 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-873ed2bddcf7357433cd2b5aed874fa3932d528545674e231111661228d256b4 WatchSource:0}: Error finding container 873ed2bddcf7357433cd2b5aed874fa3932d528545674e231111661228d256b4: Status 404 returned error can't find the container with id 873ed2bddcf7357433cd2b5aed874fa3932d528545674e231111661228d256b4 Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.751757 4808 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="eaf3fe3951832741ff2476fba9d9feaea39a5b811b74fa4173736a4ebbe7628f" exitCode=0 Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.751805 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"eaf3fe3951832741ff2476fba9d9feaea39a5b811b74fa4173736a4ebbe7628f"} Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.751834 4808 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"873ed2bddcf7357433cd2b5aed874fa3932d528545674e231111661228d256b4"} Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.752129 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.752160 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.752962 4808 status_manager.go:851] "Failed to get status for pod" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:27 crc kubenswrapper[4808]: E0311 08:44:27.753017 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.753222 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Mar 11 08:44:27 crc kubenswrapper[4808]: E0311 08:44:27.763936 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="7s" Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.789042 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:27 crc kubenswrapper[4808]: I0311 08:44:27.790100 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:28 crc kubenswrapper[4808]: E0311 08:44:28.206224 4808 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 11 08:44:28 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37" Netns:"/var/run/netns/53484dd8-5dd2-4210-9f56-7033e2cab123" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error 
setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:28 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:28 crc kubenswrapper[4808]: > Mar 11 08:44:28 crc kubenswrapper[4808]: E0311 08:44:28.206854 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 11 08:44:28 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37" Netns:"/var/run/netns/53484dd8-5dd2-4210-9f56-7033e2cab123" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:28 crc kubenswrapper[4808]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:28 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:28 crc kubenswrapper[4808]: E0311 08:44:28.206896 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 11 08:44:28 crc kubenswrapper[4808]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37" Netns:"/var/run/netns/53484dd8-5dd2-4210-9f56-7033e2cab123" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s": dial tcp 38.102.83.113:6443: connect: connection refused Mar 11 08:44:28 crc kubenswrapper[4808]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 11 08:44:28 crc kubenswrapper[4808]: > pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:28 crc kubenswrapper[4808]: E0311 08:44:28.206964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager(3a5b563b-dfdf-4720-a13f-85f72959f234)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7774777c67-x9jd7_openshift-route-controller-manager_3a5b563b-dfdf-4720-a13f-85f72959f234_0(d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37): error adding pod openshift-route-controller-manager_route-controller-manager-7774777c67-x9jd7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37\\\" Netns:\\\"/var/run/netns/53484dd8-5dd2-4210-9f56-7033e2cab123\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7774777c67-x9jd7;K8S_POD_INFRA_CONTAINER_ID=d15d6fa61aa746ec685f2aa285a1b56852ab8fc934104e0855d02945962b6f37;K8S_POD_UID=3a5b563b-dfdf-4720-a13f-85f72959f234\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7/3a5b563b-dfdf-4720-a13f-85f72959f234]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-7774777c67-x9jd7 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7774777c67-x9jd7?timeout=1m0s\\\": dial tcp 38.102.83.113:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" Mar 11 08:44:28 crc kubenswrapper[4808]: I0311 08:44:28.778226 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"61ac4838f05669dc17ecb3648eb76a89ab2e9ca3971032f8beecee0aef5baf69"} Mar 11 08:44:28 crc kubenswrapper[4808]: I0311 08:44:28.778262 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c32a9892015dc3d684e76543f1ecf8015a19e5d92f2997cd85a83c8cfc869478"} Mar 11 
08:44:28 crc kubenswrapper[4808]: I0311 08:44:28.778274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32153b16cafe1c9a48e59299cf77f226b84d7d0e975c351565df484cff524d53"} Mar 11 08:44:28 crc kubenswrapper[4808]: I0311 08:44:28.778283 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32c7af538b9b40b34a874dea1fa151aee50ca0b200a4afd05786d9ed86dcf213"} Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.798602 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.798637 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.801522 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.803028 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.803065 4808 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e54127863996440e3e037d7cbbb6a23234fb6b4fcf9fcfd071867341bb3b7963" exitCode=1 Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.804106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:29 
crc kubenswrapper[4808]: I0311 08:44:29.804159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1fda2869bf15d12adb24ebe59f9299927f07ba86eff15268a717f35718327415"} Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.804186 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e54127863996440e3e037d7cbbb6a23234fb6b4fcf9fcfd071867341bb3b7963"} Mar 11 08:44:29 crc kubenswrapper[4808]: I0311 08:44:29.804842 4808 scope.go:117] "RemoveContainer" containerID="e54127863996440e3e037d7cbbb6a23234fb6b4fcf9fcfd071867341bb3b7963" Mar 11 08:44:30 crc kubenswrapper[4808]: I0311 08:44:30.451450 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:44:30 crc kubenswrapper[4808]: I0311 08:44:30.774923 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerName="oauth-openshift" containerID="cri-o://0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340" gracePeriod=15 Mar 11 08:44:30 crc kubenswrapper[4808]: I0311 08:44:30.815260 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:44:30 crc kubenswrapper[4808]: I0311 08:44:30.817241 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 11 08:44:30 crc kubenswrapper[4808]: I0311 08:44:30.817341 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"51fbcef8ec5ae527acfadc3975f132c12a260040b738209f5ff1db13aca7da00"} Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.220835 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342532 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxxmn\" (UniqueName: \"kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342593 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342629 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342661 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc 
kubenswrapper[4808]: I0311 08:44:31.342691 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342771 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342800 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342827 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342818 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342913 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342945 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.342996 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data\") pod \"7b05200d-8025-468a-9c30-fbfd45a80b8b\" (UID: \"7b05200d-8025-468a-9c30-fbfd45a80b8b\") " Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.343281 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.343656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.344052 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.344100 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.345250 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.349411 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.349987 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.350460 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn" (OuterVolumeSpecName: "kube-api-access-bxxmn") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "kube-api-access-bxxmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.362561 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.362936 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.363020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.363416 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.363563 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.363659 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7b05200d-8025-468a-9c30-fbfd45a80b8b" (UID: "7b05200d-8025-468a-9c30-fbfd45a80b8b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444045 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444090 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444106 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444119 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444132 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444144 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444158 4808 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444170 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444181 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444194 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444205 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444216 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxxmn\" (UniqueName: \"kubernetes.io/projected/7b05200d-8025-468a-9c30-fbfd45a80b8b-kube-api-access-bxxmn\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.444229 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b05200d-8025-468a-9c30-fbfd45a80b8b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 
08:44:31.815155 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.815233 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.824574 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.833443 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerID="0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340" exitCode=0 Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.834748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" event={"ID":"7b05200d-8025-468a-9c30-fbfd45a80b8b","Type":"ContainerDied","Data":"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340"} Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.834842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" event={"ID":"7b05200d-8025-468a-9c30-fbfd45a80b8b","Type":"ContainerDied","Data":"0893a81cee756ae0cdc2508227afd0f5c095c7f76f3bd2af9148bee2c38c66f7"} Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.834884 4808 scope.go:117] "RemoveContainer" containerID="0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.835049 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vbbdr" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.864581 4808 scope.go:117] "RemoveContainer" containerID="0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340" Mar 11 08:44:31 crc kubenswrapper[4808]: E0311 08:44:31.865282 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340\": container with ID starting with 0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340 not found: ID does not exist" containerID="0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340" Mar 11 08:44:31 crc kubenswrapper[4808]: I0311 08:44:31.865514 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340"} err="failed to get container status \"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340\": rpc error: code = NotFound desc = could not find container \"0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340\": container with ID starting with 0b4c317af0392a9228a19a4528a8c24889882893b9f8cf41552da6be0d8cd340 not found: ID does not exist" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.819533 4808 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.854058 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9adcd4bb-d971-4eee-9089-cc49eaaf5db2" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.854634 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.856191 4808 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1fda2869bf15d12adb24ebe59f9299927f07ba86eff15268a717f35718327415" exitCode=255 Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.856242 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1fda2869bf15d12adb24ebe59f9299927f07ba86eff15268a717f35718327415"} Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.856417 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.856435 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.858583 4808 scope.go:117] "RemoveContainer" containerID="1fda2869bf15d12adb24ebe59f9299927f07ba86eff15268a717f35718327415" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.859864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:34 crc kubenswrapper[4808]: I0311 08:44:34.893071 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9adcd4bb-d971-4eee-9089-cc49eaaf5db2" Mar 11 08:44:35 crc kubenswrapper[4808]: E0311 08:44:35.464024 4808 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: unknown (get 
secrets)" logger="UnhandledError" Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.863114 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.864725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"90740a351734ff1d4160818887e54291f0cbba05b5ad08d28b7c7ec79b01c0eb"} Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.864962 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.864980 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.865162 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:35 crc kubenswrapper[4808]: I0311 08:44:35.869495 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9adcd4bb-d971-4eee-9089-cc49eaaf5db2" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.103659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.103732 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.103766 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.103784 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.104974 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.104974 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.105284 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.115574 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.116112 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.119051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.127135 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.130064 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.130345 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.410058 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.420909 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:44:36 crc kubenswrapper[4808]: W0311 08:44:36.559916 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3f8c1b55df237d5b717fe282b65cde3519f9698020ee731eedfbd7f6abcff31b WatchSource:0}: Error finding container 3f8c1b55df237d5b717fe282b65cde3519f9698020ee731eedfbd7f6abcff31b: Status 404 returned error can't find the container with id 3f8c1b55df237d5b717fe282b65cde3519f9698020ee731eedfbd7f6abcff31b Mar 11 08:44:36 crc kubenswrapper[4808]: W0311 08:44:36.707546 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d65f0f63d97461fcf8dddf90ca14c179a3a511253855d7b8ed76830561d0073f WatchSource:0}: Error finding container d65f0f63d97461fcf8dddf90ca14c179a3a511253855d7b8ed76830561d0073f: Status 404 returned error can't find the container with id d65f0f63d97461fcf8dddf90ca14c179a3a511253855d7b8ed76830561d0073f Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.875770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"103b59736de54d6755bb8b02b8a293f759b25ad4bccf594a39b53ed243b42708"} Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.876211 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3f8c1b55df237d5b717fe282b65cde3519f9698020ee731eedfbd7f6abcff31b"} Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.879152 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dbc2c9b3421836ae22fe7e0dd6d556ed63de205586c350ce1f1747e0bcc0e2b4"} Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.879199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d65f0f63d97461fcf8dddf90ca14c179a3a511253855d7b8ed76830561d0073f"} Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.879517 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.879563 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="70aeaa0b-1b9a-450e-bac3-61beed554491" Mar 11 08:44:36 crc kubenswrapper[4808]: W0311 08:44:36.888699 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-30ce0aecb632529ad0bed55e58eace3cbf8eb384657fedb066db8ef43bfd5e57 WatchSource:0}: Error finding container 30ce0aecb632529ad0bed55e58eace3cbf8eb384657fedb066db8ef43bfd5e57: Status 404 returned error can't find the container with id 30ce0aecb632529ad0bed55e58eace3cbf8eb384657fedb066db8ef43bfd5e57 Mar 11 08:44:36 crc kubenswrapper[4808]: I0311 08:44:36.896560 4808 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9adcd4bb-d971-4eee-9089-cc49eaaf5db2" Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.705703 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.712943 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.888297 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4db5cc9cf2b585d150765fdde8e1861ee3243760bc7b74126d9cc2e0cc34718b"} Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.888370 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"30ce0aecb632529ad0bed55e58eace3cbf8eb384657fedb066db8ef43bfd5e57"} Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.888511 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:44:37 crc kubenswrapper[4808]: I0311 08:44:37.888742 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:44:38 crc kubenswrapper[4808]: I0311 08:44:38.901030 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 11 08:44:38 crc kubenswrapper[4808]: I0311 
08:44:38.901141 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="103b59736de54d6755bb8b02b8a293f759b25ad4bccf594a39b53ed243b42708" exitCode=255 Mar 11 08:44:38 crc kubenswrapper[4808]: I0311 08:44:38.901276 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"103b59736de54d6755bb8b02b8a293f759b25ad4bccf594a39b53ed243b42708"} Mar 11 08:44:38 crc kubenswrapper[4808]: I0311 08:44:38.902267 4808 scope.go:117] "RemoveContainer" containerID="103b59736de54d6755bb8b02b8a293f759b25ad4bccf594a39b53ed243b42708" Mar 11 08:44:39 crc kubenswrapper[4808]: I0311 08:44:39.798763 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:39 crc kubenswrapper[4808]: I0311 08:44:39.799803 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:39 crc kubenswrapper[4808]: I0311 08:44:39.910517 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 11 08:44:39 crc kubenswrapper[4808]: I0311 08:44:39.910581 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17"} Mar 11 08:44:40 crc kubenswrapper[4808]: W0311 08:44:40.224534 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee8ed134_1c66_4b5d_925c_6cfc105dde4b.slice/crio-18aafb6d3c8d62e81f4d706b27c220f113f86b7d9da47b08b7e919eb5050bf98 WatchSource:0}: Error finding container 18aafb6d3c8d62e81f4d706b27c220f113f86b7d9da47b08b7e919eb5050bf98: Status 404 returned error can't find the container with id 18aafb6d3c8d62e81f4d706b27c220f113f86b7d9da47b08b7e919eb5050bf98 Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.455941 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.788625 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.789345 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.921753 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.922835 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.922876 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17" exitCode=255 Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.922942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17"} Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.922973 4808 scope.go:117] "RemoveContainer" containerID="103b59736de54d6755bb8b02b8a293f759b25ad4bccf594a39b53ed243b42708" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.923300 4808 scope.go:117] "RemoveContainer" containerID="f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17" Mar 11 08:44:40 crc kubenswrapper[4808]: E0311 08:44:40.923467 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.926039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" event={"ID":"ee8ed134-1c66-4b5d-925c-6cfc105dde4b","Type":"ContainerStarted","Data":"e3a005786bb4e31903b67dc18cd44f8bd5e302f2ac0c2d1a5f56c7d6400f31b8"} Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.926085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" event={"ID":"ee8ed134-1c66-4b5d-925c-6cfc105dde4b","Type":"ContainerStarted","Data":"18aafb6d3c8d62e81f4d706b27c220f113f86b7d9da47b08b7e919eb5050bf98"} Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.927171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:40 crc kubenswrapper[4808]: I0311 08:44:40.933333 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" Mar 11 08:44:41 crc kubenswrapper[4808]: W0311 08:44:41.207084 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5b563b_dfdf_4720_a13f_85f72959f234.slice/crio-5f992a41615940a8fe9f78178953088e779ec742b18fa0faa48779ea19cfec35 WatchSource:0}: Error finding container 5f992a41615940a8fe9f78178953088e779ec742b18fa0faa48779ea19cfec35: Status 404 returned error can't find the container with id 5f992a41615940a8fe9f78178953088e779ec742b18fa0faa48779ea19cfec35 Mar 11 08:44:41 crc kubenswrapper[4808]: I0311 08:44:41.936422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" 
event={"ID":"3a5b563b-dfdf-4720-a13f-85f72959f234","Type":"ContainerStarted","Data":"10f92a990739f97dc10ba52dc4b2bec13e16f1c475ce36c19a5018a3cfbfa06c"} Mar 11 08:44:41 crc kubenswrapper[4808]: I0311 08:44:41.936855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" event={"ID":"3a5b563b-dfdf-4720-a13f-85f72959f234","Type":"ContainerStarted","Data":"5f992a41615940a8fe9f78178953088e779ec742b18fa0faa48779ea19cfec35"} Mar 11 08:44:41 crc kubenswrapper[4808]: I0311 08:44:41.937266 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:44:41 crc kubenswrapper[4808]: I0311 08:44:41.938796 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 11 08:44:42 crc kubenswrapper[4808]: I0311 08:44:42.937277 4808 patch_prober.go:28] interesting pod/route-controller-manager-7774777c67-x9jd7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:44:42 crc kubenswrapper[4808]: I0311 08:44:42.937351 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:44:43 crc kubenswrapper[4808]: I0311 08:44:43.945841 4808 patch_prober.go:28] interesting 
pod/route-controller-manager-7774777c67-x9jd7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:44:43 crc kubenswrapper[4808]: I0311 08:44:43.946045 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.012592 4808 patch_prober.go:28] interesting pod/route-controller-manager-7774777c67-x9jd7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.013143 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.019715 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.143668 4808 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.374722 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 11 08:44:45 crc kubenswrapper[4808]: I0311 08:44:45.399558 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 11 08:44:46 crc kubenswrapper[4808]: I0311 08:44:46.150283 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 11 08:44:46 crc kubenswrapper[4808]: I0311 08:44:46.165238 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 11 08:44:46 crc kubenswrapper[4808]: I0311 08:44:46.595655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 11 08:44:46 crc kubenswrapper[4808]: I0311 08:44:46.606444 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 11 08:44:46 crc kubenswrapper[4808]: I0311 08:44:46.849750 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.081794 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.144101 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.321251 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 
08:44:47.333271 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.499534 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.586458 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.617950 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.628555 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.718262 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.813942 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.921012 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 11 08:44:47 crc kubenswrapper[4808]: I0311 08:44:47.945974 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.013292 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.065235 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.073624 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.108093 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.168667 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.197679 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.215966 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.217001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.228057 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.240841 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.334444 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.499179 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.567895 4808 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.596644 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.649028 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.757444 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.854638 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.953974 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 11 08:44:48 crc kubenswrapper[4808]: I0311 08:44:48.963203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.253646 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.256295 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.256283491 podStartE2EDuration="35.256283491s" podCreationTimestamp="2026-03-11 08:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:44:34.851462094 +0000 UTC m=+325.804785414" watchObservedRunningTime="2026-03-11 08:44:49.256283491 +0000 UTC m=+340.209606811" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 
08:44:49.257297 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx" podStartSLOduration=37.257292169 podStartE2EDuration="37.257292169s" podCreationTimestamp="2026-03-11 08:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:44:40.956588458 +0000 UTC m=+331.909911798" watchObservedRunningTime="2026-03-11 08:44:49.257292169 +0000 UTC m=+340.210615479" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.257836 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podStartSLOduration=37.257832594999996 podStartE2EDuration="37.257832595s" podCreationTimestamp="2026-03-11 08:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:44:41.959629804 +0000 UTC m=+332.912953174" watchObservedRunningTime="2026-03-11 08:44:49.257832595 +0000 UTC m=+340.211155915" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.258078 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vbbdr","openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.258114 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.258132 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7975b9fc-7zfbx","openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7"] Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.279789 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.279762819 podStartE2EDuration="15.279762819s" podCreationTimestamp="2026-03-11 08:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:44:49.275162748 +0000 UTC m=+340.228486068" watchObservedRunningTime="2026-03-11 08:44:49.279762819 +0000 UTC m=+340.233086189" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.288223 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.317527 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.429382 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.429390 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.442886 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.451399 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.530698 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.611453 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.701837 4808 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.796809 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" path="/var/lib/kubelet/pods/7b05200d-8025-468a-9c30-fbfd45a80b8b/volumes" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.849427 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 11 08:44:49 crc kubenswrapper[4808]: I0311 08:44:49.849726 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.020836 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.060563 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.106723 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.206812 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.258812 4808 patch_prober.go:28] interesting pod/route-controller-manager-7774777c67-x9jd7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.258902 4808 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.434928 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.501756 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.573606 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.604165 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.909029 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.969746 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 11 08:44:50 crc kubenswrapper[4808]: I0311 08:44:50.987696 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.006935 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.048584 4808 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.113555 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.161521 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.191493 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.268158 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.273820 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.324830 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.359452 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.492040 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.679418 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.801037 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 11 08:44:51 crc 
kubenswrapper[4808]: I0311 08:44:51.892720 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.941229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.962884 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.970271 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.984626 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 11 08:44:51 crc kubenswrapper[4808]: I0311 08:44:51.997576 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.043629 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.260563 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.278802 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.343232 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.408205 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.456413 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.536914 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.606295 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.638296 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.750125 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.770615 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.813558 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.826743 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.827527 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.833622 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 11 08:44:52 crc 
kubenswrapper[4808]: I0311 08:44:52.895914 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 11 08:44:52 crc kubenswrapper[4808]: I0311 08:44:52.992591 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.012157 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.188395 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.237929 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.262307 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.333319 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.401851 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.414017 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.420932 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.490109 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 11 
08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.507739 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.579206 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.604019 4808 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.640757 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.706593 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.789124 4808 scope.go:117] "RemoveContainer" containerID="f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.848761 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.858951 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.873648 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.884257 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.894874 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 11 08:44:53 crc kubenswrapper[4808]: I0311 08:44:53.983272 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.006909 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.007776 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.007830 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2"} Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.069055 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.108004 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.139587 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.279128 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.283441 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 11 08:44:54 crc 
kubenswrapper[4808]: I0311 08:44:54.333769 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.370873 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.413769 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.429026 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.489756 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.509109 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.574738 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.597873 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.692691 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.702143 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.725158 4808 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.927201 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.939033 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 11 08:44:54 crc kubenswrapper[4808]: I0311 08:44:54.942632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.012447 4808 patch_prober.go:28] interesting pod/route-controller-manager-7774777c67-x9jd7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.013731 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" podUID="3a5b563b-dfdf-4720-a13f-85f72959f234" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.018198 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.019143 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.019203 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2" exitCode=255 Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.019244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2"} Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.019292 4808 scope.go:117] "RemoveContainer" containerID="f03a123f500fddc53b564987c9f19c0f557fcb3b08e010a46735034f400b5d17" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.020018 4808 scope.go:117] "RemoveContainer" containerID="1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2" Mar 11 08:44:55 crc kubenswrapper[4808]: E0311 08:44:55.020338 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.039981 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.124238 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 11 08:44:55 crc 
kubenswrapper[4808]: I0311 08:44:55.159684 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.169620 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.236132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.295978 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.343977 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.389265 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.418068 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.468940 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.518626 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.520990 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.631635 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.684467 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.690875 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.700200 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.767671 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.820964 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 11 08:44:55 crc kubenswrapper[4808]: I0311 08:44:55.997732 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.026620 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.035742 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.051299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.075021 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 11 08:44:56 crc 
kubenswrapper[4808]: I0311 08:44:56.110181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.117143 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.140632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.141467 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.148376 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.371391 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.450833 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.459969 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.479300 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.482596 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.545736 4808 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.560563 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.588212 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.687048 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.719580 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.769014 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.790582 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.820790 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.845490 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 11 08:44:56 crc kubenswrapper[4808]: I0311 08:44:56.960415 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.003814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 
08:44:57.033466 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.099917 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.100127 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699" gracePeriod=5 Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.168863 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.220299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.275137 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.305835 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.340666 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.361263 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.412258 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.469612 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.664662 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.680448 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7dc7444945-sngrd"] Mar 11 08:44:57 crc kubenswrapper[4808]: E0311 08:44:57.680884 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.680972 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 08:44:57 crc kubenswrapper[4808]: E0311 08:44:57.681037 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerName="oauth-openshift" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.681089 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerName="oauth-openshift" Mar 11 08:44:57 crc kubenswrapper[4808]: E0311 08:44:57.681147 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" containerName="installer" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.681199 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" containerName="installer" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.681380 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c9c874-96ee-4ff1-88fb-eedbd0d114e8" containerName="installer" Mar 11 08:44:57 
crc kubenswrapper[4808]: I0311 08:44:57.681456 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b05200d-8025-468a-9c30-fbfd45a80b8b" containerName="oauth-openshift" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.681513 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.681912 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.685718 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.685728 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.686064 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.686472 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.686592 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.686639 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.686675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.687228 4808 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.687402 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.687928 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.687965 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.690986 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.698089 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.712250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.728268 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.729989 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.730113 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dc7444945-sngrd"] Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.762074 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 11 08:44:57 
crc kubenswrapper[4808]: I0311 08:44:57.777607 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.781837 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxrj\" (UniqueName: \"kubernetes.io/projected/a6248a14-c4ab-4a51-85b9-27eb8196d118-kube-api-access-wlxrj\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784422 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-dir\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784474 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-trusted-ca-bundle\") 
pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784579 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784614 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " 
pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784720 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-session\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784824 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.784879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-policies\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.880242 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886192 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886228 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886274 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-policies\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886323 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-dir\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886344 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886382 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxrj\" (UniqueName: \"kubernetes.io/projected/a6248a14-c4ab-4a51-85b9-27eb8196d118-kube-api-access-wlxrj\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " 
pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886483 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-dir\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886770 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886845 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.886905 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-session\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.887457 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " 
pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.887696 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-audit-policies\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.887733 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.888174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-service-ca\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.892572 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.893410 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.893954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-login\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.894961 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-session\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.902266 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-error\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.902436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 
08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.903520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-system-router-certs\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.903618 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a6248a14-c4ab-4a51-85b9-27eb8196d118-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:57 crc kubenswrapper[4808]: I0311 08:44:57.920669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxrj\" (UniqueName: \"kubernetes.io/projected/a6248a14-c4ab-4a51-85b9-27eb8196d118-kube-api-access-wlxrj\") pod \"oauth-openshift-7dc7444945-sngrd\" (UID: \"a6248a14-c4ab-4a51-85b9-27eb8196d118\") " pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.025435 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.085134 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.241127 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.251885 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.350202 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.378691 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.402915 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.415732 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.574722 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.651220 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.702675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.869319 4808 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 11 08:44:58 crc kubenswrapper[4808]: I0311 08:44:58.883441 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.027985 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.076210 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.198308 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.230756 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.272411 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.308764 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.467949 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7dc7444945-sngrd"] Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.520249 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.525406 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 11 08:44:59 
crc kubenswrapper[4808]: I0311 08:44:59.538053 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.666669 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.752495 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.766472 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.767009 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 11 08:44:59 crc kubenswrapper[4808]: I0311 08:44:59.844594 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.048577 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" event={"ID":"a6248a14-c4ab-4a51-85b9-27eb8196d118","Type":"ContainerStarted","Data":"9a204929ee479300564a9135326288a2adff446dd40a6d971af71ec4a9449491"} Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.048643 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" event={"ID":"a6248a14-c4ab-4a51-85b9-27eb8196d118","Type":"ContainerStarted","Data":"2e96a92bcd8f0b2220d8f46e61cd596053f6938db74ce51115c24727b6c53eef"} Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.048828 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:45:00 crc 
kubenswrapper[4808]: I0311 08:45:00.075444 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" podStartSLOduration=55.075426752 podStartE2EDuration="55.075426752s" podCreationTimestamp="2026-03-11 08:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:45:00.073617691 +0000 UTC m=+351.026941011" watchObservedRunningTime="2026-03-11 08:45:00.075426752 +0000 UTC m=+351.028750072" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.163434 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9"] Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.167968 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.181579 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.181920 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.194666 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9"] Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.316865 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.316970 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d795\" (UniqueName: \"kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.317020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.332530 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.350593 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7dc7444945-sngrd" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.378549 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.392348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.417934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume\") pod 
\"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.418878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.419044 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d795\" (UniqueName: \"kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.419430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.424722 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.434287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d795\" 
(UniqueName: \"kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795\") pod \"collect-profiles-29553645-mfhm9\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.496250 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.510853 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.545722 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.582392 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.653184 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.887562 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 11 08:45:00 crc kubenswrapper[4808]: I0311 08:45:00.889300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9"] Mar 11 08:45:00 crc kubenswrapper[4808]: W0311 08:45:00.908867 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38820ed5_b98d_45c8_8935_e18657f601b2.slice/crio-50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e WatchSource:0}: Error finding container 
50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e: Status 404 returned error can't find the container with id 50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.054939 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" event={"ID":"38820ed5-b98d-45c8-8935-e18657f601b2","Type":"ContainerStarted","Data":"d895f084cd2c6d40dc6a13803628f2f5f55e341124eb6b77075602b58de710e5"} Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.054992 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" event={"ID":"38820ed5-b98d-45c8-8935-e18657f601b2","Type":"ContainerStarted","Data":"50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e"} Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.068691 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" podStartSLOduration=1.068669587 podStartE2EDuration="1.068669587s" podCreationTimestamp="2026-03-11 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:45:01.066438824 +0000 UTC m=+352.019762164" watchObservedRunningTime="2026-03-11 08:45:01.068669587 +0000 UTC m=+352.021992917" Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.388503 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.694175 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 11 08:45:01 crc kubenswrapper[4808]: I0311 08:45:01.975894 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.060395 4808 generic.go:334] "Generic (PLEG): container finished" podID="38820ed5-b98d-45c8-8935-e18657f601b2" containerID="d895f084cd2c6d40dc6a13803628f2f5f55e341124eb6b77075602b58de710e5" exitCode=0 Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.060519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" event={"ID":"38820ed5-b98d-45c8-8935-e18657f601b2","Type":"ContainerDied","Data":"d895f084cd2c6d40dc6a13803628f2f5f55e341124eb6b77075602b58de710e5"} Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.609457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.685156 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.685292 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747385 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747536 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747595 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747582 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747664 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.747664 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.748108 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.748420 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.758433 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.849664 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.849699 4808 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.849710 4808 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:02 crc kubenswrapper[4808]: I0311 08:45:02.849720 4808 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.068987 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.069330 4808 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699" exitCode=137 Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.069433 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.069483 4808 scope.go:117] "RemoveContainer" containerID="4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.088730 4808 scope.go:117] "RemoveContainer" containerID="4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699" Mar 11 08:45:03 crc kubenswrapper[4808]: E0311 08:45:03.089140 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699\": container with ID starting with 4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699 not found: ID does not exist" containerID="4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.089178 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699"} err="failed to get container status \"4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699\": rpc error: code = NotFound desc = could not find container \"4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699\": container with ID starting with 4286bf7bd225f1f8e07665f0db6520ce46e55955aae3192a593aa24b290b6699 not found: ID does not exist" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.320140 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.457208 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d795\" (UniqueName: \"kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795\") pod \"38820ed5-b98d-45c8-8935-e18657f601b2\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.457317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume\") pod \"38820ed5-b98d-45c8-8935-e18657f601b2\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.457415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume\") pod \"38820ed5-b98d-45c8-8935-e18657f601b2\" (UID: \"38820ed5-b98d-45c8-8935-e18657f601b2\") " Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.458408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "38820ed5-b98d-45c8-8935-e18657f601b2" (UID: "38820ed5-b98d-45c8-8935-e18657f601b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.461522 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "38820ed5-b98d-45c8-8935-e18657f601b2" (UID: "38820ed5-b98d-45c8-8935-e18657f601b2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.462960 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795" (OuterVolumeSpecName: "kube-api-access-6d795") pod "38820ed5-b98d-45c8-8935-e18657f601b2" (UID: "38820ed5-b98d-45c8-8935-e18657f601b2"). InnerVolumeSpecName "kube-api-access-6d795". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.559200 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d795\" (UniqueName: \"kubernetes.io/projected/38820ed5-b98d-45c8-8935-e18657f601b2-kube-api-access-6d795\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.559246 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38820ed5-b98d-45c8-8935-e18657f601b2-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.559256 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38820ed5-b98d-45c8-8935-e18657f601b2-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.799214 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.799548 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.813922 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 
08:45:03.813962 4808 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d4e66cfb-6ffa-47f3-8461-d7bdb87da118" Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.818038 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 08:45:03 crc kubenswrapper[4808]: I0311 08:45:03.818164 4808 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d4e66cfb-6ffa-47f3-8461-d7bdb87da118" Mar 11 08:45:04 crc kubenswrapper[4808]: I0311 08:45:04.015849 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7774777c67-x9jd7" Mar 11 08:45:04 crc kubenswrapper[4808]: I0311 08:45:04.077341 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" Mar 11 08:45:04 crc kubenswrapper[4808]: I0311 08:45:04.077518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9" event={"ID":"38820ed5-b98d-45c8-8935-e18657f601b2","Type":"ContainerDied","Data":"50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e"} Mar 11 08:45:04 crc kubenswrapper[4808]: I0311 08:45:04.077568 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50efaa4385513854011c88055edad998dbba9dd5911c044b4b225b76265e692e" Mar 11 08:45:05 crc kubenswrapper[4808]: I0311 08:45:05.789047 4808 scope.go:117] "RemoveContainer" containerID="1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2" Mar 11 08:45:05 crc kubenswrapper[4808]: E0311 08:45:05.789686 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:45:06 crc kubenswrapper[4808]: I0311 08:45:06.428806 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:45:16 crc kubenswrapper[4808]: I0311 08:45:16.789395 4808 scope.go:117] "RemoveContainer" containerID="1da347d6f5a08a0cffe2b8a4ef6c33d2775ae118cb2f68c3aa3b69d92e2d2ea2" Mar 11 08:45:17 crc kubenswrapper[4808]: I0311 08:45:17.156689 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Mar 11 08:45:17 crc kubenswrapper[4808]: I0311 08:45:17.156968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4b7b2962c16f1f0514e1f64735eeddc3f33509f83f180b83757eb225a5690f65"} Mar 11 08:45:24 crc kubenswrapper[4808]: I0311 08:45:24.196690 4808 generic.go:334] "Generic (PLEG): container finished" podID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerID="2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669" exitCode=0 Mar 11 08:45:24 crc kubenswrapper[4808]: I0311 08:45:24.196782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerDied","Data":"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669"} Mar 11 08:45:24 crc kubenswrapper[4808]: I0311 08:45:24.197803 4808 scope.go:117] 
"RemoveContainer" containerID="2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669" Mar 11 08:45:25 crc kubenswrapper[4808]: I0311 08:45:25.205541 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerStarted","Data":"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f"} Mar 11 08:45:25 crc kubenswrapper[4808]: I0311 08:45:25.207403 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:45:25 crc kubenswrapper[4808]: I0311 08:45:25.209201 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:45:46 crc kubenswrapper[4808]: I0311 08:45:46.027299 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:45:46 crc kubenswrapper[4808]: I0311 08:45:46.027970 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.135224 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553646-85knb"] Mar 11 08:46:00 crc kubenswrapper[4808]: E0311 08:46:00.136915 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38820ed5-b98d-45c8-8935-e18657f601b2" containerName="collect-profiles" Mar 11 08:46:00 crc kubenswrapper[4808]: 
I0311 08:46:00.136964 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="38820ed5-b98d-45c8-8935-e18657f601b2" containerName="collect-profiles" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.137269 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="38820ed5-b98d-45c8-8935-e18657f601b2" containerName="collect-profiles" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.138153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.143212 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.143228 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.143940 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.144472 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553646-85knb"] Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.210730 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9fg\" (UniqueName: \"kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg\") pod \"auto-csr-approver-29553646-85knb\" (UID: \"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf\") " pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.312674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9fg\" (UniqueName: \"kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg\") pod 
\"auto-csr-approver-29553646-85knb\" (UID: \"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf\") " pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.334124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9fg\" (UniqueName: \"kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg\") pod \"auto-csr-approver-29553646-85knb\" (UID: \"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf\") " pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.472582 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:00 crc kubenswrapper[4808]: I0311 08:46:00.902743 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553646-85knb"] Mar 11 08:46:01 crc kubenswrapper[4808]: I0311 08:46:01.425804 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553646-85knb" event={"ID":"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf","Type":"ContainerStarted","Data":"26b6bd20fa26567f88b293082625e8d2cf0d55b7f871454743fad090779733eb"} Mar 11 08:46:02 crc kubenswrapper[4808]: I0311 08:46:02.432834 4808 generic.go:334] "Generic (PLEG): container finished" podID="9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" containerID="840fd9122e1eb3d0aba1cb4059d9e0be73b1c4ee9815beb3b47b642d11b61de2" exitCode=0 Mar 11 08:46:02 crc kubenswrapper[4808]: I0311 08:46:02.432950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553646-85knb" event={"ID":"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf","Type":"ContainerDied","Data":"840fd9122e1eb3d0aba1cb4059d9e0be73b1c4ee9815beb3b47b642d11b61de2"} Mar 11 08:46:03 crc kubenswrapper[4808]: I0311 08:46:03.781943 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:03 crc kubenswrapper[4808]: I0311 08:46:03.856057 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq9fg\" (UniqueName: \"kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg\") pod \"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf\" (UID: \"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf\") " Mar 11 08:46:03 crc kubenswrapper[4808]: I0311 08:46:03.861710 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg" (OuterVolumeSpecName: "kube-api-access-bq9fg") pod "9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" (UID: "9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf"). InnerVolumeSpecName "kube-api-access-bq9fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:46:03 crc kubenswrapper[4808]: I0311 08:46:03.958687 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq9fg\" (UniqueName: \"kubernetes.io/projected/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf-kube-api-access-bq9fg\") on node \"crc\" DevicePath \"\"" Mar 11 08:46:04 crc kubenswrapper[4808]: I0311 08:46:04.450599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553646-85knb" event={"ID":"9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf","Type":"ContainerDied","Data":"26b6bd20fa26567f88b293082625e8d2cf0d55b7f871454743fad090779733eb"} Mar 11 08:46:04 crc kubenswrapper[4808]: I0311 08:46:04.450653 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553646-85knb" Mar 11 08:46:04 crc kubenswrapper[4808]: I0311 08:46:04.450675 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b6bd20fa26567f88b293082625e8d2cf0d55b7f871454743fad090779733eb" Mar 11 08:46:16 crc kubenswrapper[4808]: I0311 08:46:16.027463 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:46:16 crc kubenswrapper[4808]: I0311 08:46:16.028470 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.475611 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67rsf"] Mar 11 08:46:39 crc kubenswrapper[4808]: E0311 08:46:39.476556 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" containerName="oc" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.476577 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" containerName="oc" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.476695 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" containerName="oc" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.477160 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.494928 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67rsf"] Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.615743 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.616103 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-bound-sa-token\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.616313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-certificates\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.616556 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-trusted-ca\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.616758 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf4b\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-kube-api-access-ctf4b\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.616957 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.617138 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-tls\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.617286 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.662665 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.718412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.718804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-bound-sa-token\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.719083 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-certificates\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.719327 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-trusted-ca\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc 
kubenswrapper[4808]: I0311 08:46:39.719665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctf4b\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-kube-api-access-ctf4b\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.719883 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-tls\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.720030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.721276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.721817 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-certificates\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.723153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-trusted-ca\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.729689 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.733607 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-registry-tls\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.748724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-bound-sa-token\") pod \"image-registry-66df7c8f76-67rsf\" (UID: \"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.765260 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctf4b\" (UniqueName: \"kubernetes.io/projected/d4c4aa4f-6989-4bf1-8d88-54dac88b6716-kube-api-access-ctf4b\") pod \"image-registry-66df7c8f76-67rsf\" (UID: 
\"d4c4aa4f-6989-4bf1-8d88-54dac88b6716\") " pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:39 crc kubenswrapper[4808]: I0311 08:46:39.798828 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:40 crc kubenswrapper[4808]: I0311 08:46:40.089161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67rsf"] Mar 11 08:46:40 crc kubenswrapper[4808]: I0311 08:46:40.682244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" event={"ID":"d4c4aa4f-6989-4bf1-8d88-54dac88b6716","Type":"ContainerStarted","Data":"2bc0c4c1c6c486aa012cdffc3d9c730c009225ba59a47b642751d2158a693671"} Mar 11 08:46:40 crc kubenswrapper[4808]: I0311 08:46:40.683802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" event={"ID":"d4c4aa4f-6989-4bf1-8d88-54dac88b6716","Type":"ContainerStarted","Data":"f2d60453710aa830d896d566366716dfe13bde6033f2c837e65212f1637f1fc7"} Mar 11 08:46:40 crc kubenswrapper[4808]: I0311 08:46:40.683843 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:40 crc kubenswrapper[4808]: I0311 08:46:40.717135 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" podStartSLOduration=1.717106866 podStartE2EDuration="1.717106866s" podCreationTimestamp="2026-03-11 08:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:46:40.711478165 +0000 UTC m=+451.664801495" watchObservedRunningTime="2026-03-11 08:46:40.717106866 +0000 UTC m=+451.670430236" Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 
08:46:46.027876 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.028571 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.028631 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.029538 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.029640 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec" gracePeriod=600 Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.719106 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec" exitCode=0 Mar 11 
08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.719188 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec"} Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.719542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab"} Mar 11 08:46:46 crc kubenswrapper[4808]: I0311 08:46:46.719590 4808 scope.go:117] "RemoveContainer" containerID="5dad38518c1c86c0eb3926b0ff17b9fe7847bd1e942591307c49dfbd87a61d19" Mar 11 08:46:59 crc kubenswrapper[4808]: I0311 08:46:59.807651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-67rsf" Mar 11 08:46:59 crc kubenswrapper[4808]: I0311 08:46:59.884724 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:47:24 crc kubenswrapper[4808]: I0311 08:47:24.928784 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" podUID="65550926-3f8b-436d-8a5c-e425d8c4875f" containerName="registry" containerID="cri-o://530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797" gracePeriod=30 Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.408844 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.488175 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c5fj\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489114 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489146 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489175 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489484 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.489763 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"65550926-3f8b-436d-8a5c-e425d8c4875f\" (UID: \"65550926-3f8b-436d-8a5c-e425d8c4875f\") " Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.490111 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.490264 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.490655 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.496245 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.496656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.497118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj" (OuterVolumeSpecName: "kube-api-access-9c5fj") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "kube-api-access-9c5fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.497459 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.505004 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.518997 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "65550926-3f8b-436d-8a5c-e425d8c4875f" (UID: "65550926-3f8b-436d-8a5c-e425d8c4875f"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592109 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c5fj\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-kube-api-access-9c5fj\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592171 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592193 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65550926-3f8b-436d-8a5c-e425d8c4875f-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592210 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/65550926-3f8b-436d-8a5c-e425d8c4875f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592228 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65550926-3f8b-436d-8a5c-e425d8c4875f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:25 crc kubenswrapper[4808]: I0311 08:47:25.592245 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/65550926-3f8b-436d-8a5c-e425d8c4875f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.043486 4808 generic.go:334] "Generic (PLEG): container finished" podID="65550926-3f8b-436d-8a5c-e425d8c4875f" containerID="530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797" exitCode=0 Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 
08:47:26.043593 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.043585 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" event={"ID":"65550926-3f8b-436d-8a5c-e425d8c4875f","Type":"ContainerDied","Data":"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797"} Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.044106 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-76p85" event={"ID":"65550926-3f8b-436d-8a5c-e425d8c4875f","Type":"ContainerDied","Data":"4b169c4d22bb4941869a3dc4f405ee30482126efc68e79b7589d0d26b1d32b1a"} Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.044141 4808 scope.go:117] "RemoveContainer" containerID="530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797" Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.071427 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.080004 4808 scope.go:117] "RemoveContainer" containerID="530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797" Mar 11 08:47:26 crc kubenswrapper[4808]: E0311 08:47:26.080596 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797\": container with ID starting with 530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797 not found: ID does not exist" containerID="530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797" Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.080635 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797"} err="failed to get container status \"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797\": rpc error: code = NotFound desc = could not find container \"530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797\": container with ID starting with 530f9546a16fa2c98dd75198f9613a0e7eac5b9518c1427ed4c35d0d32de8797 not found: ID does not exist" Mar 11 08:47:26 crc kubenswrapper[4808]: I0311 08:47:26.080669 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-76p85"] Mar 11 08:47:27 crc kubenswrapper[4808]: I0311 08:47:27.802195 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65550926-3f8b-436d-8a5c-e425d8c4875f" path="/var/lib/kubelet/pods/65550926-3f8b-436d-8a5c-e425d8c4875f/volumes" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.084931 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.086108 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pf6x9" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="registry-server" containerID="cri-o://a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3" gracePeriod=30 Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.092295 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.092546 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bc8sc" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="registry-server" containerID="cri-o://dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94" gracePeriod=30 Mar 11 
08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.105970 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.106166 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" containerID="cri-o://77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f" gracePeriod=30 Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.131515 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.131824 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnszv" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="registry-server" containerID="cri-o://f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a" gracePeriod=30 Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.139861 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b6wjx"] Mar 11 08:47:41 crc kubenswrapper[4808]: E0311 08:47:41.140236 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65550926-3f8b-436d-8a5c-e425d8c4875f" containerName="registry" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.140263 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="65550926-3f8b-436d-8a5c-e425d8c4875f" containerName="registry" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.140465 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="65550926-3f8b-436d-8a5c-e425d8c4875f" containerName="registry" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.141069 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.142381 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.143078 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqmwn" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="registry-server" containerID="cri-o://52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b" gracePeriod=30 Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.150878 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b6wjx"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.334629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkjd\" (UniqueName: \"kubernetes.io/projected/469bdff1-1585-4d3e-8261-8d36e14aae58-kube-api-access-slkjd\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.334908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.334997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.442113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.442177 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkjd\" (UniqueName: \"kubernetes.io/projected/469bdff1-1585-4d3e-8261-8d36e14aae58-kube-api-access-slkjd\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.442210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.443610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 
crc kubenswrapper[4808]: I0311 08:47:41.454281 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/469bdff1-1585-4d3e-8261-8d36e14aae58-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.458191 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkjd\" (UniqueName: \"kubernetes.io/projected/469bdff1-1585-4d3e-8261-8d36e14aae58-kube-api-access-slkjd\") pod \"marketplace-operator-79b997595-b6wjx\" (UID: \"469bdff1-1585-4d3e-8261-8d36e14aae58\") " pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.526018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.530172 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.533664 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.541021 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.543959 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.581369 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644129 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities\") pod \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644232 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9x95\" (UniqueName: \"kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95\") pod \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644261 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities\") pod \"32a93168-4bf6-48c0-89b4-5e4393234562\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644295 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content\") pod \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644325 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca\") pod \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644372 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics\") pod \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\" (UID: \"090328a2-0e9e-49a5-b82a-e35947e2fbf2\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644417 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities\") pod \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644459 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content\") pod \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644502 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pc8c\" (UniqueName: \"kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c\") pod \"32a93168-4bf6-48c0-89b4-5e4393234562\" (UID: \"32a93168-4bf6-48c0-89b4-5e4393234562\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644535 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzk8f\" (UniqueName: \"kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f\") pod \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\" (UID: \"8d26a515-59f9-49a6-a9e0-6ff62b523ab4\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644573 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content\") pod \"32a93168-4bf6-48c0-89b4-5e4393234562\" (UID: 
\"32a93168-4bf6-48c0-89b4-5e4393234562\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.644596 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcgv\" (UniqueName: \"kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv\") pod \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\" (UID: \"5a83bb5a-5523-4258-b6ce-895c0b1d7f07\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.645172 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities" (OuterVolumeSpecName: "utilities") pod "8d26a515-59f9-49a6-a9e0-6ff62b523ab4" (UID: "8d26a515-59f9-49a6-a9e0-6ff62b523ab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.645259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities" (OuterVolumeSpecName: "utilities") pod "5a83bb5a-5523-4258-b6ce-895c0b1d7f07" (UID: "5a83bb5a-5523-4258-b6ce-895c0b1d7f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.645674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities" (OuterVolumeSpecName: "utilities") pod "32a93168-4bf6-48c0-89b4-5e4393234562" (UID: "32a93168-4bf6-48c0-89b4-5e4393234562"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.646526 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "090328a2-0e9e-49a5-b82a-e35947e2fbf2" (UID: "090328a2-0e9e-49a5-b82a-e35947e2fbf2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.648375 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c" (OuterVolumeSpecName: "kube-api-access-4pc8c") pod "32a93168-4bf6-48c0-89b4-5e4393234562" (UID: "32a93168-4bf6-48c0-89b4-5e4393234562"). InnerVolumeSpecName "kube-api-access-4pc8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.652219 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f" (OuterVolumeSpecName: "kube-api-access-wzk8f") pod "8d26a515-59f9-49a6-a9e0-6ff62b523ab4" (UID: "8d26a515-59f9-49a6-a9e0-6ff62b523ab4"). InnerVolumeSpecName "kube-api-access-wzk8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.653885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv" (OuterVolumeSpecName: "kube-api-access-vzcgv") pod "5a83bb5a-5523-4258-b6ce-895c0b1d7f07" (UID: "5a83bb5a-5523-4258-b6ce-895c0b1d7f07"). InnerVolumeSpecName "kube-api-access-vzcgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.654864 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95" (OuterVolumeSpecName: "kube-api-access-m9x95") pod "090328a2-0e9e-49a5-b82a-e35947e2fbf2" (UID: "090328a2-0e9e-49a5-b82a-e35947e2fbf2"). InnerVolumeSpecName "kube-api-access-m9x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.660916 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "090328a2-0e9e-49a5-b82a-e35947e2fbf2" (UID: "090328a2-0e9e-49a5-b82a-e35947e2fbf2"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.684243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a83bb5a-5523-4258-b6ce-895c0b1d7f07" (UID: "5a83bb5a-5523-4258-b6ce-895c0b1d7f07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.715868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d26a515-59f9-49a6-a9e0-6ff62b523ab4" (UID: "8d26a515-59f9-49a6-a9e0-6ff62b523ab4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.725427 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32a93168-4bf6-48c0-89b4-5e4393234562" (UID: "32a93168-4bf6-48c0-89b4-5e4393234562"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.741477 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b6wjx"] Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.745607 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities\") pod \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.745716 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww7bz\" (UniqueName: \"kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz\") pod \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.745744 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content\") pod \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\" (UID: \"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd\") " Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.746449 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities" (OuterVolumeSpecName: 
"utilities") pod "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" (UID: "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: W0311 08:47:41.748611 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod469bdff1_1585_4d3e_8261_8d36e14aae58.slice/crio-4bbfc811097d75f1671760c369fbc2c387859e49e4d6cfca5d9b5b83f948bd88 WatchSource:0}: Error finding container 4bbfc811097d75f1671760c369fbc2c387859e49e4d6cfca5d9b5b83f948bd88: Status 404 returned error can't find the container with id 4bbfc811097d75f1671760c369fbc2c387859e49e4d6cfca5d9b5b83f948bd88 Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.749695 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz" (OuterVolumeSpecName: "kube-api-access-ww7bz") pod "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" (UID: "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd"). InnerVolumeSpecName "kube-api-access-ww7bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760888 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pc8c\" (UniqueName: \"kubernetes.io/projected/32a93168-4bf6-48c0-89b4-5e4393234562-kube-api-access-4pc8c\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760914 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760924 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzk8f\" (UniqueName: \"kubernetes.io/projected/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-kube-api-access-wzk8f\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760933 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcgv\" (UniqueName: \"kubernetes.io/projected/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-kube-api-access-vzcgv\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760942 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760950 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760958 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9x95\" (UniqueName: \"kubernetes.io/projected/090328a2-0e9e-49a5-b82a-e35947e2fbf2-kube-api-access-m9x95\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc 
kubenswrapper[4808]: I0311 08:47:41.760965 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a93168-4bf6-48c0-89b4-5e4393234562-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760973 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760981 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760989 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/090328a2-0e9e-49a5-b82a-e35947e2fbf2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.760997 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww7bz\" (UniqueName: \"kubernetes.io/projected/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-kube-api-access-ww7bz\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.761005 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a83bb5a-5523-4258-b6ce-895c0b1d7f07-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.761014 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d26a515-59f9-49a6-a9e0-6ff62b523ab4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.861817 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" (UID: "5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:47:41 crc kubenswrapper[4808]: I0311 08:47:41.861965 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.172061 4808 generic.go:334] "Generic (PLEG): container finished" podID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerID="77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f" exitCode=0 Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.172122 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.172143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerDied","Data":"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.172746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b25qz" event={"ID":"090328a2-0e9e-49a5-b82a-e35947e2fbf2","Type":"ContainerDied","Data":"fe4c0be5f4837f9e7bd9459ee81d96af39f7b14a37319149e79cdf0bea3a308e"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.172773 4808 scope.go:117] "RemoveContainer" containerID="77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.175026 4808 
generic.go:334] "Generic (PLEG): container finished" podID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerID="52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b" exitCode=0 Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.175069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerDied","Data":"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.175101 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqmwn" event={"ID":"5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd","Type":"ContainerDied","Data":"4bc7b1bedc609ebf4989499e799bb099c4bbfbcc608b8f89395f991e6807c3b9"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.175156 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqmwn" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.179986 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" event={"ID":"469bdff1-1585-4d3e-8261-8d36e14aae58","Type":"ContainerStarted","Data":"7ade292ecfda53ad3d37fec4819549fe22bfb36219ebeeca583aeb25430aa6f5"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.180052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" event={"ID":"469bdff1-1585-4d3e-8261-8d36e14aae58","Type":"ContainerStarted","Data":"4bbfc811097d75f1671760c369fbc2c387859e49e4d6cfca5d9b5b83f948bd88"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.180230 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.183023 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.183475 4808 generic.go:334] "Generic (PLEG): container finished" podID="32a93168-4bf6-48c0-89b4-5e4393234562" containerID="a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3" exitCode=0 Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.183540 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerDied","Data":"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.183565 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf6x9" event={"ID":"32a93168-4bf6-48c0-89b4-5e4393234562","Type":"ContainerDied","Data":"8c1765628c5202bc313e32c21f8c546084dd7a15bd5474fcbb05993c6af6893f"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.183653 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf6x9" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.187617 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerID="dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94" exitCode=0 Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.187686 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bc8sc" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.187690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerDied","Data":"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.187731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bc8sc" event={"ID":"8d26a515-59f9-49a6-a9e0-6ff62b523ab4","Type":"ContainerDied","Data":"e0bb0b31911453de8bcbf5d6df0c9e6806b4142d12224ca4635e0909604a2dcd"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.188983 4808 scope.go:117] "RemoveContainer" containerID="2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.190441 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerID="f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a" exitCode=0 Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.190470 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerDied","Data":"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.190494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnszv" event={"ID":"5a83bb5a-5523-4258-b6ce-895c0b1d7f07","Type":"ContainerDied","Data":"63c34a18d15242e471c764fee72ff5c382d4030e95bd6fb6b22f3cba14d5ee8d"} Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.190543 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnszv" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.213051 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.220213 4808 scope.go:117] "RemoveContainer" containerID="77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.231877 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f\": container with ID starting with 77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f not found: ID does not exist" containerID="77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.231927 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f"} err="failed to get container status \"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f\": rpc error: code = NotFound desc = could not find container \"77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f\": container with ID starting with 77de04bad18f0cd79042a4028b77233e899f3dd7beddaef38d8e7bd316d90b6f not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.231956 4808 scope.go:117] "RemoveContainer" containerID="2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.232270 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669\": container with ID starting with 
2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669 not found: ID does not exist" containerID="2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.232335 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669"} err="failed to get container status \"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669\": rpc error: code = NotFound desc = could not find container \"2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669\": container with ID starting with 2e74dbe82b83d6432a70673b4d9b66afacb6c9f1751b3a5da20c04b029b95669 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.232367 4808 scope.go:117] "RemoveContainer" containerID="52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.233867 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b25qz"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.255157 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b6wjx" podStartSLOduration=1.255139765 podStartE2EDuration="1.255139765s" podCreationTimestamp="2026-03-11 08:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:47:42.25393557 +0000 UTC m=+513.207258880" watchObservedRunningTime="2026-03-11 08:47:42.255139765 +0000 UTC m=+513.208463075" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.260775 4808 scope.go:117] "RemoveContainer" containerID="5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.280194 4808 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.288843 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqmwn"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.314185 4808 scope.go:117] "RemoveContainer" containerID="22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.321779 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.330391 4808 scope.go:117] "RemoveContainer" containerID="52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.330808 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bc8sc"] Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.330806 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b\": container with ID starting with 52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b not found: ID does not exist" containerID="52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.330861 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b"} err="failed to get container status \"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b\": rpc error: code = NotFound desc = could not find container \"52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b\": container with ID starting with 52adc17afa44923452a812d2cfc7bc81f38473d12438047c7572015d462dc61b not found: ID does not exist" 
Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.330882 4808 scope.go:117] "RemoveContainer" containerID="5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.333953 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee\": container with ID starting with 5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee not found: ID does not exist" containerID="5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.333981 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee"} err="failed to get container status \"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee\": rpc error: code = NotFound desc = could not find container \"5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee\": container with ID starting with 5c58af5174cca0dba072646bf90a707fcdbf5317f44b1005ec563a8ef47be2ee not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.334002 4808 scope.go:117] "RemoveContainer" containerID="22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.334340 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c\": container with ID starting with 22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c not found: ID does not exist" containerID="22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.334376 4808 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c"} err="failed to get container status \"22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c\": rpc error: code = NotFound desc = could not find container \"22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c\": container with ID starting with 22774567f7df2852b549858401f0464dd96c4621b4b8cf668a33045d6147516c not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.334393 4808 scope.go:117] "RemoveContainer" containerID="a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.342641 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.346688 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnszv"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.347454 4808 scope.go:117] "RemoveContainer" containerID="87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.351033 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.354224 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pf6x9"] Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.370288 4808 scope.go:117] "RemoveContainer" containerID="5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.386660 4808 scope.go:117] "RemoveContainer" containerID="a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.387054 4808 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3\": container with ID starting with a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3 not found: ID does not exist" containerID="a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387091 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3"} err="failed to get container status \"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3\": rpc error: code = NotFound desc = could not find container \"a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3\": container with ID starting with a850b6949811010daca36762a9c5a798c603bbad12e27d18071279f6090857b3 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387113 4808 scope.go:117] "RemoveContainer" containerID="87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.387427 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189\": container with ID starting with 87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189 not found: ID does not exist" containerID="87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387457 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189"} err="failed to get container status \"87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189\": rpc error: code = NotFound desc = could not find container 
\"87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189\": container with ID starting with 87d5a46de9ee3150455457a050c7c44516668e08c685c81df04c1b29771b5189 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387479 4808 scope.go:117] "RemoveContainer" containerID="5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.387808 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9\": container with ID starting with 5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9 not found: ID does not exist" containerID="5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387831 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9"} err="failed to get container status \"5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9\": rpc error: code = NotFound desc = could not find container \"5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9\": container with ID starting with 5600a6dba32204f485a6b40da78cb61f2f88a90ee59e22b71531fa9398bbf2e9 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.387843 4808 scope.go:117] "RemoveContainer" containerID="dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.399580 4808 scope.go:117] "RemoveContainer" containerID="a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.415199 4808 scope.go:117] "RemoveContainer" containerID="e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8" Mar 11 08:47:42 crc 
kubenswrapper[4808]: I0311 08:47:42.425396 4808 scope.go:117] "RemoveContainer" containerID="dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.425638 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94\": container with ID starting with dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94 not found: ID does not exist" containerID="dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.425675 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94"} err="failed to get container status \"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94\": rpc error: code = NotFound desc = could not find container \"dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94\": container with ID starting with dc7cefd976b122d8c3e8895f0c75162bb03c6521fc2697025802f893b8715e94 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.425697 4808 scope.go:117] "RemoveContainer" containerID="a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.425952 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9\": container with ID starting with a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9 not found: ID does not exist" containerID="a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.425995 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9"} err="failed to get container status \"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9\": rpc error: code = NotFound desc = could not find container \"a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9\": container with ID starting with a155dd81d66ce35c16f17641800b659c3950077d091e588f6781d1b9a5f090a9 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.426020 4808 scope.go:117] "RemoveContainer" containerID="e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.426456 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8\": container with ID starting with e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8 not found: ID does not exist" containerID="e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.426493 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8"} err="failed to get container status \"e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8\": rpc error: code = NotFound desc = could not find container \"e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8\": container with ID starting with e303b3282c058dd47673b1784d8ef582f35033d49ca8d5626c3545d6f2d8f6d8 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.426515 4808 scope.go:117] "RemoveContainer" containerID="f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.437094 4808 scope.go:117] "RemoveContainer" 
containerID="c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.460800 4808 scope.go:117] "RemoveContainer" containerID="8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.472194 4808 scope.go:117] "RemoveContainer" containerID="f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.472527 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a\": container with ID starting with f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a not found: ID does not exist" containerID="f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.472554 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a"} err="failed to get container status \"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a\": rpc error: code = NotFound desc = could not find container \"f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a\": container with ID starting with f2c6d092023bbd464a5a5d9a4736b9c7061598634a2633b2a31add43e5c6dd8a not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.472574 4808 scope.go:117] "RemoveContainer" containerID="c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.472930 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678\": container with ID starting with 
c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678 not found: ID does not exist" containerID="c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.472990 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678"} err="failed to get container status \"c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678\": rpc error: code = NotFound desc = could not find container \"c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678\": container with ID starting with c3d179c8b0135b11a21d82d8d0b409ce75683ffd9a9c424f4f4e8c8f9fe7e678 not found: ID does not exist" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.473027 4808 scope.go:117] "RemoveContainer" containerID="8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748" Mar 11 08:47:42 crc kubenswrapper[4808]: E0311 08:47:42.473465 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748\": container with ID starting with 8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748 not found: ID does not exist" containerID="8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748" Mar 11 08:47:42 crc kubenswrapper[4808]: I0311 08:47:42.473489 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748"} err="failed to get container status \"8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748\": rpc error: code = NotFound desc = could not find container \"8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748\": container with ID starting with 8c1d7c702e054ded9daa9bd4fbb6a1199f4f98de666664dfe792895e0ba1d748 not found: ID does not 
exist" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.305963 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.306253 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.306272 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.306304 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.306316 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.306336 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.306347 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307009 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307028 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307046 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="extract-content" Mar 11 08:47:43 
crc kubenswrapper[4808]: I0311 08:47:43.307058 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307071 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307081 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307099 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307108 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307124 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307133 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307145 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307153 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307161 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="registry-server" Mar 11 08:47:43 crc 
kubenswrapper[4808]: I0311 08:47:43.307169 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307178 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307186 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307199 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307207 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="extract-utilities" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307217 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307226 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: E0311 08:47:43.307236 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307246 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="extract-content" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307432 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 
crc kubenswrapper[4808]: I0311 08:47:43.307453 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" containerName="marketplace-operator" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307469 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307485 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307497 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.307511 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" containerName="registry-server" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.308525 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.311315 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.330491 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.385509 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gj8\" (UniqueName: \"kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.385573 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.385613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.486869 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gj8\" (UniqueName: \"kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8\") pod \"certified-operators-6xd22\" 
(UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.486944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.486995 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.487722 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.487932 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.498426 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2vsnp"] Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.513110 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.515885 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.517983 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vsnp"] Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.520694 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gj8\" (UniqueName: \"kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8\") pod \"certified-operators-6xd22\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.587865 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvt9h\" (UniqueName: \"kubernetes.io/projected/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-kube-api-access-vvt9h\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.587928 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-catalog-content\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.587999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-utilities\") pod \"community-operators-2vsnp\" (UID: 
\"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.625434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.688337 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-utilities\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.688406 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvt9h\" (UniqueName: \"kubernetes.io/projected/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-kube-api-access-vvt9h\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.688449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-catalog-content\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.688765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-utilities\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.688832 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-catalog-content\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.710253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvt9h\" (UniqueName: \"kubernetes.io/projected/e7fbeb3b-1b0e-481e-a0e9-6673407ec18f-kube-api-access-vvt9h\") pod \"community-operators-2vsnp\" (UID: \"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f\") " pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.797204 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090328a2-0e9e-49a5-b82a-e35947e2fbf2" path="/var/lib/kubelet/pods/090328a2-0e9e-49a5-b82a-e35947e2fbf2/volumes" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.798184 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a93168-4bf6-48c0-89b4-5e4393234562" path="/var/lib/kubelet/pods/32a93168-4bf6-48c0-89b4-5e4393234562/volumes" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.799129 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a83bb5a-5523-4258-b6ce-895c0b1d7f07" path="/var/lib/kubelet/pods/5a83bb5a-5523-4258-b6ce-895c0b1d7f07/volumes" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.800473 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd" path="/var/lib/kubelet/pods/5cd00c2e-5e6d-4fa5-a2a5-28153c8ef4bd/volumes" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.801207 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d26a515-59f9-49a6-a9e0-6ff62b523ab4" path="/var/lib/kubelet/pods/8d26a515-59f9-49a6-a9e0-6ff62b523ab4/volumes" Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.845684 4808 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 08:47:43 crc kubenswrapper[4808]: I0311 08:47:43.849003 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:43 crc kubenswrapper[4808]: W0311 08:47:43.851148 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd3fd8a8_324f_444f_99ce_9916706b9b32.slice/crio-0faaa6510220595207f3053181174e90ab03bae40d30071da0e186dae42535c0 WatchSource:0}: Error finding container 0faaa6510220595207f3053181174e90ab03bae40d30071da0e186dae42535c0: Status 404 returned error can't find the container with id 0faaa6510220595207f3053181174e90ab03bae40d30071da0e186dae42535c0 Mar 11 08:47:44 crc kubenswrapper[4808]: I0311 08:47:44.220164 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerID="fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b" exitCode=0 Mar 11 08:47:44 crc kubenswrapper[4808]: I0311 08:47:44.220260 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerDied","Data":"fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b"} Mar 11 08:47:44 crc kubenswrapper[4808]: I0311 08:47:44.220316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerStarted","Data":"0faaa6510220595207f3053181174e90ab03bae40d30071da0e186dae42535c0"} Mar 11 08:47:44 crc kubenswrapper[4808]: I0311 08:47:44.264971 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vsnp"] Mar 11 08:47:44 crc kubenswrapper[4808]: W0311 08:47:44.277862 4808 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7fbeb3b_1b0e_481e_a0e9_6673407ec18f.slice/crio-74a2991486157383826751a11a9c8a3bf1b9c15cb4d3874d5e2bf36f8ba4fc90 WatchSource:0}: Error finding container 74a2991486157383826751a11a9c8a3bf1b9c15cb4d3874d5e2bf36f8ba4fc90: Status 404 returned error can't find the container with id 74a2991486157383826751a11a9c8a3bf1b9c15cb4d3874d5e2bf36f8ba4fc90 Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.226726 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerStarted","Data":"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42"} Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.229055 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7fbeb3b-1b0e-481e-a0e9-6673407ec18f" containerID="e7185054f510fe84f9080d862784d0146d1461265f39d3722bd1a2eefec8d4ab" exitCode=0 Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.229104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vsnp" event={"ID":"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f","Type":"ContainerDied","Data":"e7185054f510fe84f9080d862784d0146d1461265f39d3722bd1a2eefec8d4ab"} Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.229137 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vsnp" event={"ID":"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f","Type":"ContainerStarted","Data":"74a2991486157383826751a11a9c8a3bf1b9c15cb4d3874d5e2bf36f8ba4fc90"} Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.698847 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmn6r"] Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.699917 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.704959 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.716525 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmn6r"] Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.813179 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbjh\" (UniqueName: \"kubernetes.io/projected/89ebd57f-641c-4d65-b3b8-cc6ccc005770-kube-api-access-8vbjh\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.813505 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-catalog-content\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.813530 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-utilities\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.900924 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.903799 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.906989 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.911666 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.915270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vbjh\" (UniqueName: \"kubernetes.io/projected/89ebd57f-641c-4d65-b3b8-cc6ccc005770-kube-api-access-8vbjh\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.915382 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-catalog-content\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.916035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-utilities\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.916310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-utilities\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 
11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.915983 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ebd57f-641c-4d65-b3b8-cc6ccc005770-catalog-content\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:45 crc kubenswrapper[4808]: I0311 08:47:45.941038 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vbjh\" (UniqueName: \"kubernetes.io/projected/89ebd57f-641c-4d65-b3b8-cc6ccc005770-kube-api-access-8vbjh\") pod \"redhat-marketplace-qmn6r\" (UID: \"89ebd57f-641c-4d65-b3b8-cc6ccc005770\") " pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.016817 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrw4\" (UniqueName: \"kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.016885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.016922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " 
pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.032167 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.118501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqrw4\" (UniqueName: \"kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.118872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.118949 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.119504 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.119603 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.183159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqrw4\" (UniqueName: \"kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4\") pod \"redhat-operators-rjf7z\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.229787 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.241170 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7fbeb3b-1b0e-481e-a0e9-6673407ec18f" containerID="eebbddc205ced58aa3fed5658f904e4d83ca36e4bbbc34ca2cba21701a0e4e82" exitCode=0 Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.241450 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vsnp" event={"ID":"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f","Type":"ContainerDied","Data":"eebbddc205ced58aa3fed5658f904e4d83ca36e4bbbc34ca2cba21701a0e4e82"} Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.243550 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerID="d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42" exitCode=0 Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.243587 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerDied","Data":"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42"} Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 
08:47:46.278738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmn6r"] Mar 11 08:47:46 crc kubenswrapper[4808]: W0311 08:47:46.289089 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ebd57f_641c_4d65_b3b8_cc6ccc005770.slice/crio-105471606d8623efd7895b4e51ee5accdf1aacf03d0bffb27ab168c89c147f41 WatchSource:0}: Error finding container 105471606d8623efd7895b4e51ee5accdf1aacf03d0bffb27ab168c89c147f41: Status 404 returned error can't find the container with id 105471606d8623efd7895b4e51ee5accdf1aacf03d0bffb27ab168c89c147f41 Mar 11 08:47:46 crc kubenswrapper[4808]: I0311 08:47:46.632667 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 08:47:46 crc kubenswrapper[4808]: W0311 08:47:46.636866 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a8a6d81_44ea_4ab5_a6c2_fa901e1ebf40.slice/crio-31d7df0b1a92b6af99a8547c27256530f89aed4b5f5978403bb104683b11ea20 WatchSource:0}: Error finding container 31d7df0b1a92b6af99a8547c27256530f89aed4b5f5978403bb104683b11ea20: Status 404 returned error can't find the container with id 31d7df0b1a92b6af99a8547c27256530f89aed4b5f5978403bb104683b11ea20 Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.253976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vsnp" event={"ID":"e7fbeb3b-1b0e-481e-a0e9-6673407ec18f","Type":"ContainerStarted","Data":"395ef027b7b8dcd94f48fe647ae1e61cd760a757ef8a097883094ef732f27b95"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.258525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" 
event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerStarted","Data":"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.260009 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerID="1fe0a3720c5193da7a2bf28b8802dce077f0452c2fb8fb7006be60fe8ebe461c" exitCode=0 Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.260064 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerDied","Data":"1fe0a3720c5193da7a2bf28b8802dce077f0452c2fb8fb7006be60fe8ebe461c"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.260088 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerStarted","Data":"31d7df0b1a92b6af99a8547c27256530f89aed4b5f5978403bb104683b11ea20"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.262458 4808 generic.go:334] "Generic (PLEG): container finished" podID="89ebd57f-641c-4d65-b3b8-cc6ccc005770" containerID="fca993f3040fde57f6c7f848cc2ed8e51a6815bbe472342e2af534225f2d545c" exitCode=0 Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.262481 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmn6r" event={"ID":"89ebd57f-641c-4d65-b3b8-cc6ccc005770","Type":"ContainerDied","Data":"fca993f3040fde57f6c7f848cc2ed8e51a6815bbe472342e2af534225f2d545c"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.262495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmn6r" event={"ID":"89ebd57f-641c-4d65-b3b8-cc6ccc005770","Type":"ContainerStarted","Data":"105471606d8623efd7895b4e51ee5accdf1aacf03d0bffb27ab168c89c147f41"} Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 
08:47:47.275876 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2vsnp" podStartSLOduration=2.797617202 podStartE2EDuration="4.275855179s" podCreationTimestamp="2026-03-11 08:47:43 +0000 UTC" firstStartedPulling="2026-03-11 08:47:45.230393113 +0000 UTC m=+516.183716433" lastFinishedPulling="2026-03-11 08:47:46.70863108 +0000 UTC m=+517.661954410" observedRunningTime="2026-03-11 08:47:47.272323354 +0000 UTC m=+518.225646674" watchObservedRunningTime="2026-03-11 08:47:47.275855179 +0000 UTC m=+518.229178499" Mar 11 08:47:47 crc kubenswrapper[4808]: I0311 08:47:47.313104 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xd22" podStartSLOduration=1.864454663 podStartE2EDuration="4.313086066s" podCreationTimestamp="2026-03-11 08:47:43 +0000 UTC" firstStartedPulling="2026-03-11 08:47:44.223204972 +0000 UTC m=+515.176528322" lastFinishedPulling="2026-03-11 08:47:46.671836375 +0000 UTC m=+517.625159725" observedRunningTime="2026-03-11 08:47:47.310408287 +0000 UTC m=+518.263731627" watchObservedRunningTime="2026-03-11 08:47:47.313086066 +0000 UTC m=+518.266409396" Mar 11 08:47:48 crc kubenswrapper[4808]: I0311 08:47:48.268217 4808 generic.go:334] "Generic (PLEG): container finished" podID="89ebd57f-641c-4d65-b3b8-cc6ccc005770" containerID="87e703d6d895ac49af8efa8f45b65088a45a39e364e762d0c1bc3ed3ae67d5c6" exitCode=0 Mar 11 08:47:48 crc kubenswrapper[4808]: I0311 08:47:48.268274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmn6r" event={"ID":"89ebd57f-641c-4d65-b3b8-cc6ccc005770","Type":"ContainerDied","Data":"87e703d6d895ac49af8efa8f45b65088a45a39e364e762d0c1bc3ed3ae67d5c6"} Mar 11 08:47:48 crc kubenswrapper[4808]: I0311 08:47:48.271168 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" 
event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerStarted","Data":"2f693ca5f7842cd62201e937bc422b97a81173594c3c463df398dde4054c3800"} Mar 11 08:47:49 crc kubenswrapper[4808]: I0311 08:47:49.279089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmn6r" event={"ID":"89ebd57f-641c-4d65-b3b8-cc6ccc005770","Type":"ContainerStarted","Data":"30f1873a82bbc5f3c6ff4f49d7c5ec5c9da22c7a39c3cf6c67b1df280c680ec5"} Mar 11 08:47:49 crc kubenswrapper[4808]: I0311 08:47:49.282871 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerID="2f693ca5f7842cd62201e937bc422b97a81173594c3c463df398dde4054c3800" exitCode=0 Mar 11 08:47:49 crc kubenswrapper[4808]: I0311 08:47:49.282919 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerDied","Data":"2f693ca5f7842cd62201e937bc422b97a81173594c3c463df398dde4054c3800"} Mar 11 08:47:49 crc kubenswrapper[4808]: I0311 08:47:49.284595 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 08:47:49 crc kubenswrapper[4808]: I0311 08:47:49.306493 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmn6r" podStartSLOduration=2.837945456 podStartE2EDuration="4.306473031s" podCreationTimestamp="2026-03-11 08:47:45 +0000 UTC" firstStartedPulling="2026-03-11 08:47:47.26511389 +0000 UTC m=+518.218437230" lastFinishedPulling="2026-03-11 08:47:48.733641485 +0000 UTC m=+519.686964805" observedRunningTime="2026-03-11 08:47:49.304505843 +0000 UTC m=+520.257829163" watchObservedRunningTime="2026-03-11 08:47:49.306473031 +0000 UTC m=+520.259796391" Mar 11 08:47:50 crc kubenswrapper[4808]: I0311 08:47:50.293147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerStarted","Data":"29513221fa40a2f9b4434f3864ec7928adecba4e1493638e65f3de914068b1d8"} Mar 11 08:47:50 crc kubenswrapper[4808]: I0311 08:47:50.319920 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjf7z" podStartSLOduration=2.8953343030000003 podStartE2EDuration="5.319901632s" podCreationTimestamp="2026-03-11 08:47:45 +0000 UTC" firstStartedPulling="2026-03-11 08:47:47.26142116 +0000 UTC m=+518.214744480" lastFinishedPulling="2026-03-11 08:47:49.685988489 +0000 UTC m=+520.639311809" observedRunningTime="2026-03-11 08:47:50.316925644 +0000 UTC m=+521.270248994" watchObservedRunningTime="2026-03-11 08:47:50.319901632 +0000 UTC m=+521.273224952" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.625928 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.628332 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.680688 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.849747 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.850062 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:53 crc kubenswrapper[4808]: I0311 08:47:53.904659 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:54 crc 
kubenswrapper[4808]: I0311 08:47:54.385065 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 08:47:54 crc kubenswrapper[4808]: I0311 08:47:54.385820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2vsnp" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.033398 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.033757 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.088942 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.231181 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.231473 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:56 crc kubenswrapper[4808]: I0311 08:47:56.369079 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmn6r" Mar 11 08:47:57 crc kubenswrapper[4808]: I0311 08:47:57.277024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:47:57 crc kubenswrapper[4808]: I0311 08:47:57.379868 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.144824 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29553648-9lzqz"] Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.146188 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.149627 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.149722 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.149879 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.160469 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553648-9lzqz"] Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.185173 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgdr\" (UniqueName: \"kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr\") pod \"auto-csr-approver-29553648-9lzqz\" (UID: \"73272a25-2d50-4ea7-8e43-a6c3d99e8523\") " pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.286980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kgdr\" (UniqueName: \"kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr\") pod \"auto-csr-approver-29553648-9lzqz\" (UID: \"73272a25-2d50-4ea7-8e43-a6c3d99e8523\") " pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.317114 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kgdr\" (UniqueName: 
\"kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr\") pod \"auto-csr-approver-29553648-9lzqz\" (UID: \"73272a25-2d50-4ea7-8e43-a6c3d99e8523\") " pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.478152 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:00 crc kubenswrapper[4808]: I0311 08:48:00.758791 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553648-9lzqz"] Mar 11 08:48:01 crc kubenswrapper[4808]: I0311 08:48:01.364397 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" event={"ID":"73272a25-2d50-4ea7-8e43-a6c3d99e8523","Type":"ContainerStarted","Data":"f899cc6cee582b298aee5de3e10706e7cc064b910f2caf39eecc2704d80269a4"} Mar 11 08:48:02 crc kubenswrapper[4808]: I0311 08:48:02.371311 4808 generic.go:334] "Generic (PLEG): container finished" podID="73272a25-2d50-4ea7-8e43-a6c3d99e8523" containerID="114f83b7949b2db80f6af93611df19dc9c04b09c2b7ac3d5318ddaf68aaee213" exitCode=0 Mar 11 08:48:02 crc kubenswrapper[4808]: I0311 08:48:02.371584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" event={"ID":"73272a25-2d50-4ea7-8e43-a6c3d99e8523","Type":"ContainerDied","Data":"114f83b7949b2db80f6af93611df19dc9c04b09c2b7ac3d5318ddaf68aaee213"} Mar 11 08:48:03 crc kubenswrapper[4808]: I0311 08:48:03.726472 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:03 crc kubenswrapper[4808]: I0311 08:48:03.833080 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kgdr\" (UniqueName: \"kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr\") pod \"73272a25-2d50-4ea7-8e43-a6c3d99e8523\" (UID: \"73272a25-2d50-4ea7-8e43-a6c3d99e8523\") " Mar 11 08:48:03 crc kubenswrapper[4808]: I0311 08:48:03.839074 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr" (OuterVolumeSpecName: "kube-api-access-4kgdr") pod "73272a25-2d50-4ea7-8e43-a6c3d99e8523" (UID: "73272a25-2d50-4ea7-8e43-a6c3d99e8523"). InnerVolumeSpecName "kube-api-access-4kgdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:48:03 crc kubenswrapper[4808]: I0311 08:48:03.934863 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kgdr\" (UniqueName: \"kubernetes.io/projected/73272a25-2d50-4ea7-8e43-a6c3d99e8523-kube-api-access-4kgdr\") on node \"crc\" DevicePath \"\"" Mar 11 08:48:04 crc kubenswrapper[4808]: I0311 08:48:04.388568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" event={"ID":"73272a25-2d50-4ea7-8e43-a6c3d99e8523","Type":"ContainerDied","Data":"f899cc6cee582b298aee5de3e10706e7cc064b910f2caf39eecc2704d80269a4"} Mar 11 08:48:04 crc kubenswrapper[4808]: I0311 08:48:04.388677 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f899cc6cee582b298aee5de3e10706e7cc064b910f2caf39eecc2704d80269a4" Mar 11 08:48:04 crc kubenswrapper[4808]: I0311 08:48:04.388626 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553648-9lzqz" Mar 11 08:48:04 crc kubenswrapper[4808]: I0311 08:48:04.804924 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553642-n69sp"] Mar 11 08:48:04 crc kubenswrapper[4808]: I0311 08:48:04.809393 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553642-n69sp"] Mar 11 08:48:05 crc kubenswrapper[4808]: I0311 08:48:05.796352 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ace785-0297-4201-9d1c-778af0740058" path="/var/lib/kubelet/pods/29ace785-0297-4201-9d1c-778af0740058/volumes" Mar 11 08:48:46 crc kubenswrapper[4808]: I0311 08:48:46.028103 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:48:46 crc kubenswrapper[4808]: I0311 08:48:46.028896 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:49:16 crc kubenswrapper[4808]: I0311 08:49:16.027966 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:49:16 crc kubenswrapper[4808]: I0311 08:49:16.028979 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.027270 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.028527 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.028625 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.029582 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.029687 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab" gracePeriod=600 Mar 11 
08:49:46 crc kubenswrapper[4808]: E0311 08:49:46.197806 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dda5309_668d_4e3c_b3b2_1d708eecc578.slice/crio-conmon-e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab.scope\": RecentStats: unable to find data in memory cache]" Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.204700 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab" exitCode=0 Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.204770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab"} Mar 11 08:49:46 crc kubenswrapper[4808]: I0311 08:49:46.204842 4808 scope.go:117] "RemoveContainer" containerID="c6794dd7ab092f5a96326d5fa33059ecdf5805f11897a09c759ab292bb6c6eec" Mar 11 08:49:47 crc kubenswrapper[4808]: I0311 08:49:47.218753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5"} Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.149664 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553650-l5bbk"] Mar 11 08:50:00 crc kubenswrapper[4808]: E0311 08:50:00.150888 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73272a25-2d50-4ea7-8e43-a6c3d99e8523" containerName="oc" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.150919 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73272a25-2d50-4ea7-8e43-a6c3d99e8523" containerName="oc" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.151141 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="73272a25-2d50-4ea7-8e43-a6c3d99e8523" containerName="oc" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.151906 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.157695 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.158099 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.158252 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.161213 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553650-l5bbk"] Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.267316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpnw\" (UniqueName: \"kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw\") pod \"auto-csr-approver-29553650-l5bbk\" (UID: \"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c\") " pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.369323 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpnw\" (UniqueName: \"kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw\") pod \"auto-csr-approver-29553650-l5bbk\" (UID: \"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c\") " 
pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.402999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzpnw\" (UniqueName: \"kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw\") pod \"auto-csr-approver-29553650-l5bbk\" (UID: \"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c\") " pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.485816 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:00 crc kubenswrapper[4808]: I0311 08:50:00.713620 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553650-l5bbk"] Mar 11 08:50:01 crc kubenswrapper[4808]: I0311 08:50:01.324881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" event={"ID":"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c","Type":"ContainerStarted","Data":"0a8ebe13875a946ae3a1930e679758935416b2856b798cc21dbc2d002e4cb722"} Mar 11 08:50:02 crc kubenswrapper[4808]: I0311 08:50:02.339304 4808 generic.go:334] "Generic (PLEG): container finished" podID="5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" containerID="aed826059c55a11766c57ccb9bc0e7e0c65ffc98781eaeff7bba0544c82ba5ca" exitCode=0 Mar 11 08:50:02 crc kubenswrapper[4808]: I0311 08:50:02.339372 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" event={"ID":"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c","Type":"ContainerDied","Data":"aed826059c55a11766c57ccb9bc0e7e0c65ffc98781eaeff7bba0544c82ba5ca"} Mar 11 08:50:03 crc kubenswrapper[4808]: I0311 08:50:03.603177 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:03 crc kubenswrapper[4808]: I0311 08:50:03.714642 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzpnw\" (UniqueName: \"kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw\") pod \"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c\" (UID: \"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c\") " Mar 11 08:50:03 crc kubenswrapper[4808]: I0311 08:50:03.723516 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw" (OuterVolumeSpecName: "kube-api-access-fzpnw") pod "5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" (UID: "5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c"). InnerVolumeSpecName "kube-api-access-fzpnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:50:03 crc kubenswrapper[4808]: I0311 08:50:03.816229 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzpnw\" (UniqueName: \"kubernetes.io/projected/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c-kube-api-access-fzpnw\") on node \"crc\" DevicePath \"\"" Mar 11 08:50:04 crc kubenswrapper[4808]: I0311 08:50:04.358514 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" event={"ID":"5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c","Type":"ContainerDied","Data":"0a8ebe13875a946ae3a1930e679758935416b2856b798cc21dbc2d002e4cb722"} Mar 11 08:50:04 crc kubenswrapper[4808]: I0311 08:50:04.358909 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8ebe13875a946ae3a1930e679758935416b2856b798cc21dbc2d002e4cb722" Mar 11 08:50:04 crc kubenswrapper[4808]: I0311 08:50:04.358577 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553650-l5bbk" Mar 11 08:50:04 crc kubenswrapper[4808]: I0311 08:50:04.669874 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553644-fdj6x"] Mar 11 08:50:04 crc kubenswrapper[4808]: I0311 08:50:04.675826 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553644-fdj6x"] Mar 11 08:50:05 crc kubenswrapper[4808]: I0311 08:50:05.796265 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b2195c-43fc-4253-9675-13d6836d7c49" path="/var/lib/kubelet/pods/a1b2195c-43fc-4253-9675-13d6836d7c49/volumes" Mar 11 08:50:35 crc kubenswrapper[4808]: I0311 08:50:35.819113 4808 scope.go:117] "RemoveContainer" containerID="4b4e3d4d2083ba9f10c7206fa7a62ec4ff0cec8a7039796a036308eb0e3a51b9" Mar 11 08:50:35 crc kubenswrapper[4808]: I0311 08:50:35.872970 4808 scope.go:117] "RemoveContainer" containerID="adc1b8c556c2fe10f316cd7fea1f8b3c3c2b8e6b4d70df7b29eb9cd75c956b58" Mar 11 08:51:46 crc kubenswrapper[4808]: I0311 08:51:46.027699 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:51:46 crc kubenswrapper[4808]: I0311 08:51:46.028322 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.135844 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553652-tbd96"] Mar 11 08:52:00 crc kubenswrapper[4808]: 
E0311 08:52:00.137638 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" containerName="oc" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.137656 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" containerName="oc" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.137757 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" containerName="oc" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.138126 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.139764 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.140018 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.140155 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.147906 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553652-tbd96"] Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.178117 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbrv\" (UniqueName: \"kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv\") pod \"auto-csr-approver-29553652-tbd96\" (UID: \"165effca-3ddd-4f44-9e37-38142ea6cfe6\") " pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.279084 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9vbrv\" (UniqueName: \"kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv\") pod \"auto-csr-approver-29553652-tbd96\" (UID: \"165effca-3ddd-4f44-9e37-38142ea6cfe6\") " pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.302521 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbrv\" (UniqueName: \"kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv\") pod \"auto-csr-approver-29553652-tbd96\" (UID: \"165effca-3ddd-4f44-9e37-38142ea6cfe6\") " pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.460033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:00 crc kubenswrapper[4808]: I0311 08:52:00.735881 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553652-tbd96"] Mar 11 08:52:01 crc kubenswrapper[4808]: I0311 08:52:01.139924 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553652-tbd96" event={"ID":"165effca-3ddd-4f44-9e37-38142ea6cfe6","Type":"ContainerStarted","Data":"498f5c984e75b11be81e2dbe0919b2cc074184f71018e85778d0fe91d3a9233e"} Mar 11 08:52:02 crc kubenswrapper[4808]: I0311 08:52:02.152417 4808 generic.go:334] "Generic (PLEG): container finished" podID="165effca-3ddd-4f44-9e37-38142ea6cfe6" containerID="88f77b9f5b01e21a5ceb4871bdbe498513e15bfd8369974e34be9506b9c7eae8" exitCode=0 Mar 11 08:52:02 crc kubenswrapper[4808]: I0311 08:52:02.152490 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553652-tbd96" event={"ID":"165effca-3ddd-4f44-9e37-38142ea6cfe6","Type":"ContainerDied","Data":"88f77b9f5b01e21a5ceb4871bdbe498513e15bfd8369974e34be9506b9c7eae8"} Mar 11 08:52:03 crc kubenswrapper[4808]: I0311 
08:52:03.464828 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:03 crc kubenswrapper[4808]: I0311 08:52:03.646475 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbrv\" (UniqueName: \"kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv\") pod \"165effca-3ddd-4f44-9e37-38142ea6cfe6\" (UID: \"165effca-3ddd-4f44-9e37-38142ea6cfe6\") " Mar 11 08:52:03 crc kubenswrapper[4808]: I0311 08:52:03.655118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv" (OuterVolumeSpecName: "kube-api-access-9vbrv") pod "165effca-3ddd-4f44-9e37-38142ea6cfe6" (UID: "165effca-3ddd-4f44-9e37-38142ea6cfe6"). InnerVolumeSpecName "kube-api-access-9vbrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:52:03 crc kubenswrapper[4808]: I0311 08:52:03.748091 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbrv\" (UniqueName: \"kubernetes.io/projected/165effca-3ddd-4f44-9e37-38142ea6cfe6-kube-api-access-9vbrv\") on node \"crc\" DevicePath \"\"" Mar 11 08:52:04 crc kubenswrapper[4808]: I0311 08:52:04.167777 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553652-tbd96" event={"ID":"165effca-3ddd-4f44-9e37-38142ea6cfe6","Type":"ContainerDied","Data":"498f5c984e75b11be81e2dbe0919b2cc074184f71018e85778d0fe91d3a9233e"} Mar 11 08:52:04 crc kubenswrapper[4808]: I0311 08:52:04.167842 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553652-tbd96" Mar 11 08:52:04 crc kubenswrapper[4808]: I0311 08:52:04.168165 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498f5c984e75b11be81e2dbe0919b2cc074184f71018e85778d0fe91d3a9233e" Mar 11 08:52:04 crc kubenswrapper[4808]: I0311 08:52:04.528334 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553646-85knb"] Mar 11 08:52:04 crc kubenswrapper[4808]: I0311 08:52:04.533620 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553646-85knb"] Mar 11 08:52:05 crc kubenswrapper[4808]: I0311 08:52:05.795666 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf" path="/var/lib/kubelet/pods/9ff9f1ae-9f2f-4a38-8728-1c6f8b533daf/volumes" Mar 11 08:52:16 crc kubenswrapper[4808]: I0311 08:52:16.027571 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:52:16 crc kubenswrapper[4808]: I0311 08:52:16.028186 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:52:24 crc kubenswrapper[4808]: I0311 08:52:23.998975 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-f7nlr" podUID="759ae8a0-0d30-4da6-82e5-7d82ebfec823" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:52:35 crc 
kubenswrapper[4808]: I0311 08:52:35.948218 4808 scope.go:117] "RemoveContainer" containerID="840fd9122e1eb3d0aba1cb4059d9e0be73b1c4ee9815beb3b47b642d11b61de2" Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.027562 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.028232 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.028299 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.028943 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.029004 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5" gracePeriod=600 Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 
08:52:46.457664 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5" exitCode=0 Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.457756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5"} Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.458558 4808 scope.go:117] "RemoveContainer" containerID="e39a2d960963f858eb3b99fd35396864663897c2db7e9fbce15cdb56f2cd6cab" Mar 11 08:52:46 crc kubenswrapper[4808]: I0311 08:52:46.459493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777"} Mar 11 08:52:58 crc kubenswrapper[4808]: I0311 08:52:58.947994 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.148504 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553654-fdqcq"] Mar 11 08:54:00 crc kubenswrapper[4808]: E0311 08:54:00.149298 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165effca-3ddd-4f44-9e37-38142ea6cfe6" containerName="oc" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.149313 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="165effca-3ddd-4f44-9e37-38142ea6cfe6" containerName="oc" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.149463 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="165effca-3ddd-4f44-9e37-38142ea6cfe6" containerName="oc" Mar 11 
08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.149886 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.153001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.153327 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.154678 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.157818 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553654-fdqcq"] Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.239312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsrj\" (UniqueName: \"kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj\") pod \"auto-csr-approver-29553654-fdqcq\" (UID: \"f6a1463b-9472-494c-b46e-75c7098a9e62\") " pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.340084 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcsrj\" (UniqueName: \"kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj\") pod \"auto-csr-approver-29553654-fdqcq\" (UID: \"f6a1463b-9472-494c-b46e-75c7098a9e62\") " pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.361313 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcsrj\" (UniqueName: 
\"kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj\") pod \"auto-csr-approver-29553654-fdqcq\" (UID: \"f6a1463b-9472-494c-b46e-75c7098a9e62\") " pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.474417 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.897594 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553654-fdqcq"] Mar 11 08:54:00 crc kubenswrapper[4808]: I0311 08:54:00.907384 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 08:54:01 crc kubenswrapper[4808]: I0311 08:54:01.027438 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" event={"ID":"f6a1463b-9472-494c-b46e-75c7098a9e62","Type":"ContainerStarted","Data":"bd4075265961838cb30b03c1a9521a7bd81f801ca6a39f4de9bfbb7eb41ecbf2"} Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.430496 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8wfl5"] Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.431702 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-controller" containerID="cri-o://0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.432177 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="sbdb" containerID="cri-o://01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078" gracePeriod=30 Mar 11 08:54:02 
crc kubenswrapper[4808]: I0311 08:54:02.432246 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="nbdb" containerID="cri-o://e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.432297 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="northd" containerID="cri-o://a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.432335 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.432418 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-node" containerID="cri-o://17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.432459 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-acl-logging" containerID="cri-o://3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.499509 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" containerID="cri-o://14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180" gracePeriod=30 Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.721296 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/3.log" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.723961 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovn-acl-logging/0.log" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.725089 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovn-controller/0.log" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.725544 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775057 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nqzhb"] Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775292 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775307 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775319 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="northd" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775327 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="northd" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775339 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kubecfg-setup" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775346 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kubecfg-setup" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775359 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775382 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775390 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" 
containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775396 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775406 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="sbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775412 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="sbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775421 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-node" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775430 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-node" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775443 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="nbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775450 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="nbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775461 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-ovn-metrics" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775467 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-ovn-metrics" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775478 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" 
containerName="ovn-acl-logging" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775485 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-acl-logging" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775491 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775498 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775607 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775621 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775629 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="nbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775638 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775648 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="kube-rbac-proxy-node" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775658 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="sbdb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775668 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775679 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775692 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="northd" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775703 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775713 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovn-acl-logging" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775838 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775848 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: E0311 08:54:02.775855 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775861 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.775944 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerName="ovnkube-controller" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.777407 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.878533 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.878616 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.878694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k5ct\" (UniqueName: \"kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.878771 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879745 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash" (OuterVolumeSpecName: "host-slash") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879718 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879772 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879917 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879792 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879928 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log" (OuterVolumeSpecName: "node-log") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879908 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879970 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879984 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.879987 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880000 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880002 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880049 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880080 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880122 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880154 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880170 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib\") pod \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\" (UID: \"afeac5d0-d84f-4776-ae37-a03c8f0f66b8\") " Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880291 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-log-socket\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-kubelet\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-systemd-units\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880340 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides" (OuterVolumeSpecName: "env-overrides") 
pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880347 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovn-node-metrics-cert\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880389 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880413 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-etc-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-script-lib\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880470 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gpldt\" (UniqueName: \"kubernetes.io/projected/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-kube-api-access-gpldt\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880487 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880587 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-env-overrides\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880610 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-bin\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880630 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-node-log\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880649 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket" (OuterVolumeSpecName: "log-socket") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-netns\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-config\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.880933 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881261 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-systemd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-netd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-slash\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-var-lib-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881346 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-ovn\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881463 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881474 4808 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881485 4808 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881494 4808 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-log-socket\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881504 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881513 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881523 4808 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881531 4808 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881539 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881547 4808 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881557 4808 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881564 4808 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-slash\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881572 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc 
kubenswrapper[4808]: I0311 08:54:02.881581 4808 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881588 4808 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-node-log\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881597 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.881607 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.883560 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct" (OuterVolumeSpecName: "kube-api-access-9k5ct") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "kube-api-access-9k5ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.883799 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.892633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "afeac5d0-d84f-4776-ae37-a03c8f0f66b8" (UID: "afeac5d0-d84f-4776-ae37-a03c8f0f66b8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-kubelet\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983269 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-systemd-units\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovn-node-metrics-cert\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-script-lib\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-etc-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpldt\" (UniqueName: \"kubernetes.io/projected/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-kube-api-access-gpldt\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983485 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-kubelet\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983534 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-etc-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-systemd-units\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: 
I0311 08:54:02.983765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983809 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-env-overrides\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-bin\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-node-log\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983884 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983895 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-netns\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984087 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-config\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984118 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-script-lib\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-systemd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984175 4808 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-bin\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-netd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.983929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-node-log\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984244 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-slash\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-systemd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-run-netns\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984319 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-var-lib-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-slash\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-host-cni-netd\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-var-lib-openvswitch\") pod 
\"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984528 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-ovn\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984567 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-ovn\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984565 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-env-overrides\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-run-openvswitch\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 
08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-log-socket\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984740 4808 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984760 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k5ct\" (UniqueName: \"kubernetes.io/projected/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-kube-api-access-9k5ct\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984845 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afeac5d0-d84f-4776-ae37-a03c8f0f66b8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984779 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-log-socket\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.984761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovnkube-config\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:02 crc kubenswrapper[4808]: I0311 08:54:02.988684 
4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-ovn-node-metrics-cert\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.015409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpldt\" (UniqueName: \"kubernetes.io/projected/cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea-kube-api-access-gpldt\") pod \"ovnkube-node-nqzhb\" (UID: \"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.042384 4808 generic.go:334] "Generic (PLEG): container finished" podID="f6a1463b-9472-494c-b46e-75c7098a9e62" containerID="060f361a44d029c1563e602d7c5ca51bccbcfb1fa002f6f0dfd510cad78dd996" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.042478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" event={"ID":"f6a1463b-9472-494c-b46e-75c7098a9e62","Type":"ContainerDied","Data":"060f361a44d029c1563e602d7c5ca51bccbcfb1fa002f6f0dfd510cad78dd996"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.046111 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/2.log" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.046636 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/1.log" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.046696 4808 generic.go:334] "Generic (PLEG): container finished" podID="c1a75dfb-31dd-4275-a309-c9e7130feb05" containerID="a74cf4fc7a6efc5e697b4c5b638237ec2b87c79fbcc672ad3c5e57df7e0e9cd7" exitCode=2 Mar 11 08:54:03 crc 
kubenswrapper[4808]: I0311 08:54:03.046734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerDied","Data":"a74cf4fc7a6efc5e697b4c5b638237ec2b87c79fbcc672ad3c5e57df7e0e9cd7"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.046791 4808 scope.go:117] "RemoveContainer" containerID="e012d5673ee21ba84ca94a2309891fa86969898e35da381f78fc2c18734d636c" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.047519 4808 scope.go:117] "RemoveContainer" containerID="a74cf4fc7a6efc5e697b4c5b638237ec2b87c79fbcc672ad3c5e57df7e0e9cd7" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.050517 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovnkube-controller/3.log" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.054544 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovn-acl-logging/0.log" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.056432 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8wfl5_afeac5d0-d84f-4776-ae37-a03c8f0f66b8/ovn-controller/0.log" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057603 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057646 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057674 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057695 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057716 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057734 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1" exitCode=0 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057753 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22" exitCode=143 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057771 4808 generic.go:334] "Generic (PLEG): container finished" podID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164" exitCode=143 Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057811 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057880 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.057912 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058175 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058206 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058228 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058245 4808 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058257 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058269 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058279 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058290 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058301 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058312 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.058323 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060127 4808 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060169 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060194 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060210 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060221 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060232 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060243 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060254 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060265 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060276 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060290 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060300 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060332 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060344 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060355 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060398 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060409 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060420 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060430 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060440 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060450 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060462 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060476 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8wfl5" event={"ID":"afeac5d0-d84f-4776-ae37-a03c8f0f66b8","Type":"ContainerDied","Data":"b5fb214c37e3f6908ae8b5c1d7c0a6c7f1142819930144cdd979bfeb40b3e7f2"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060496 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060509 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060519 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060532 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060543 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060553 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060562 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060573 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060582 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.060592 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"}
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.086866 4808 scope.go:117] "RemoveContainer" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.100198 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.118970 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8wfl5"]
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.125242 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8wfl5"]
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.127441 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.173508 4808 scope.go:117] "RemoveContainer" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.213708 4808 scope.go:117] "RemoveContainer" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.230088 4808 scope.go:117] "RemoveContainer" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.251046 4808 scope.go:117] "RemoveContainer" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.264428 4808 scope.go:117] "RemoveContainer" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.342274 4808 scope.go:117] "RemoveContainer" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.356315 4808 scope.go:117] "RemoveContainer" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.371074 4808 scope.go:117] "RemoveContainer" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.384877 4808 scope.go:117] "RemoveContainer" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.385411 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": container with ID starting with 14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180 not found: ID does not exist" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.385450 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} err="failed to get container status \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": rpc error: code = NotFound desc = could not find container \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": container with ID starting with 14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.385478 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.385926 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": container with ID starting with e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72 not found: ID does not exist" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.385954 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} err="failed to get container status \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": rpc error: code = NotFound desc = could not find container \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": container with ID starting with e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.386004 4808 scope.go:117] "RemoveContainer" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.386570 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": container with ID starting with 01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078 not found: ID does not exist" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.386601 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} err="failed to get container status \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": rpc error: code = NotFound desc = could not find container \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": container with ID starting with 01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.386620 4808 scope.go:117] "RemoveContainer" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.386932 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": container with ID starting with e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e not found: ID does not exist" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.386975 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} err="failed to get container status \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": rpc error: code = NotFound desc = could not find container \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": container with ID starting with e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.387005 4808 scope.go:117] "RemoveContainer" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.387521 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": container with ID starting with a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e not found: ID does not exist" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.387551 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} err="failed to get container status \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": rpc error: code = NotFound desc = could not find container \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": container with ID starting with a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.387570 4808 scope.go:117] "RemoveContainer" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.388043 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": container with ID starting with 02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474 not found: ID does not exist" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388073 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} err="failed to get container status \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": rpc error: code = NotFound desc = could not find container \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": container with ID starting with 02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388092 4808 scope.go:117] "RemoveContainer" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.388478 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": container with ID starting with 17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1 not found: ID does not exist" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388509 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} err="failed to get container status \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": rpc error: code = NotFound desc = could not find container \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": container with ID starting with 17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388528 4808 scope.go:117] "RemoveContainer" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.388797 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": container with ID starting with 3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22 not found: ID does not exist" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388826 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} err="failed to get container status \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": rpc error: code = NotFound desc = could not find container \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": container with ID starting with 3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.388842 4808 scope.go:117] "RemoveContainer" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.389289 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": container with ID starting with 0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164 not found: ID does not exist" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.389318 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} err="failed to get container status \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": rpc error: code = NotFound desc = could not find container \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": container with ID starting with 0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.389334 4808 scope.go:117] "RemoveContainer" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"
Mar 11 08:54:03 crc kubenswrapper[4808]: E0311 08:54:03.389707 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": container with ID starting with 7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2 not found: ID does not exist" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.389756 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"} err="failed to get container status \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": rpc error: code = NotFound desc = could not find container \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": container with ID starting with 7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.389773 4808 scope.go:117] "RemoveContainer" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390028 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} err="failed to get container status \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": rpc error: code = NotFound desc = could not find container \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": container with ID starting with 14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390047 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390350 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} err="failed to get container status \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": rpc error: code = NotFound desc = could not find container \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": container with ID starting with e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390384 4808 scope.go:117] "RemoveContainer" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390795 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} err="failed to get container status \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": rpc error: code = NotFound desc = could not find container \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": container with ID starting with 01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.390826 4808 scope.go:117] "RemoveContainer" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.391092 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} err="failed to get container status \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": rpc error: code = NotFound desc = could not find container \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": container with ID starting with e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.391122 4808 scope.go:117] "RemoveContainer" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.391654 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} err="failed to get container status \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": rpc error: code = NotFound desc = could not find container \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": container with ID starting with a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.391677 4808 scope.go:117] "RemoveContainer" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392011 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} err="failed to get container status \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": rpc error: code = NotFound desc = could not find container \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": container with ID starting with 02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392054 4808 scope.go:117] "RemoveContainer" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392476 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} err="failed to get container status \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": rpc error: code = NotFound desc = could not find container \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": container with ID starting with 17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392503 4808 scope.go:117] "RemoveContainer" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392932 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} err="failed to get container status \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": rpc error: code = NotFound desc = could not find container \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": container with ID starting with 3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.392956 4808 scope.go:117] "RemoveContainer" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.393235 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} err="failed to get container status \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": rpc error: code = NotFound desc = could not find container \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": container with ID starting with 0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.393264 4808 scope.go:117] "RemoveContainer" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.393849 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"} err="failed to get container status \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": rpc error: code = NotFound desc = could not find container \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": container with ID starting with 7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.393872 4808 scope.go:117] "RemoveContainer" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.394141 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} err="failed to get container status \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": rpc error: code = NotFound desc = could not find container \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": container with ID starting with 14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.394167 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.394609 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} err="failed to get container status \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": rpc error: code = NotFound desc = could not find container \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": container with ID starting with e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.394635 4808 scope.go:117] "RemoveContainer" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.394979 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} err="failed to get container status \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": rpc error: code = NotFound desc = could not find container \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": container with ID starting with 01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395002 4808 scope.go:117] "RemoveContainer" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395285 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} err="failed to get container status \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": rpc error: code = NotFound desc = could not find container \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": container with ID starting with e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395309 4808 scope.go:117] "RemoveContainer" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} err="failed to get container status \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": rpc error: code = NotFound desc = could not find container \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": container with ID starting with a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395566 4808 scope.go:117] "RemoveContainer" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395852 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} err="failed to get container status \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": rpc error: code = NotFound desc = could not find container \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": container with ID starting with 02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.395877 4808 scope.go:117] "RemoveContainer" containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396187 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} err="failed to get container status \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": rpc error: code = NotFound desc = could not find container \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": container with ID starting with 17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396209 4808 scope.go:117] "RemoveContainer" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396488 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} err="failed to get container status \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": rpc error: code = NotFound desc = could not find container \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": container with ID starting with 3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396511 4808 scope.go:117] "RemoveContainer" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396937 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} err="failed to get container status \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": rpc error: code = NotFound desc = could not find container \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": container with ID starting with 0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.396963 4808 scope.go:117] "RemoveContainer" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.397200 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"} err="failed to get container status \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": rpc error: code = NotFound desc = could not find container \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": container with ID starting with 7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2 not found: ID does not exist"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.397225 4808 scope.go:117] "RemoveContainer" containerID="14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"
Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.397611 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180"} err="failed to get container status \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": rpc error: code = NotFound desc = could not find container \"14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180\": container with ID starting with 14f606735543a42f93d6dcfef5cf41caeb01d4d37301694aac50fa3451463180 not found: ID does not
exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.397636 4808 scope.go:117] "RemoveContainer" containerID="e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398087 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72"} err="failed to get container status \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": rpc error: code = NotFound desc = could not find container \"e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72\": container with ID starting with e8ed6835884a9397b2548e8636b4e06a1733e82faf47693035faf763591bfa72 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398115 4808 scope.go:117] "RemoveContainer" containerID="01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398581 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078"} err="failed to get container status \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": rpc error: code = NotFound desc = could not find container \"01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078\": container with ID starting with 01f1b29fe1734c41ddcc1b36367ccc54148fa85f2e05072fc0b60987dfcbb078 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398603 4808 scope.go:117] "RemoveContainer" containerID="e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398849 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e"} err="failed to get container status 
\"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": rpc error: code = NotFound desc = could not find container \"e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e\": container with ID starting with e1d10229731c437de7f29f504fec11ac3688a0a5039812d76551a867f814b95e not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.398873 4808 scope.go:117] "RemoveContainer" containerID="a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399177 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e"} err="failed to get container status \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": rpc error: code = NotFound desc = could not find container \"a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e\": container with ID starting with a351ca31265d6256ce9c08680495dd7a9b7882cd17a8faf4a0d224a800737e4e not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399201 4808 scope.go:117] "RemoveContainer" containerID="02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399498 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474"} err="failed to get container status \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": rpc error: code = NotFound desc = could not find container \"02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474\": container with ID starting with 02a71d4f33fd4d8bcf666c2d01f03d31c6e0f2b8eaa0cc183b48251e36cdc474 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399521 4808 scope.go:117] "RemoveContainer" 
containerID="17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399862 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1"} err="failed to get container status \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": rpc error: code = NotFound desc = could not find container \"17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1\": container with ID starting with 17fbdac0c2d9f6080066329737fa9866b0e953b38687d6c7b2c62dea2e9964a1 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.399887 4808 scope.go:117] "RemoveContainer" containerID="3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.400161 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22"} err="failed to get container status \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": rpc error: code = NotFound desc = could not find container \"3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22\": container with ID starting with 3674634f2b9f7a0e2fcf3758da17080b26a6284c8446d8326bc405b1307f8c22 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.400182 4808 scope.go:117] "RemoveContainer" containerID="0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.400414 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164"} err="failed to get container status \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": rpc error: code = NotFound desc = could 
not find container \"0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164\": container with ID starting with 0f11f10c1e9b99cab011ae05555fe3dd8aeb6e7df5cb166f07c6bcfffc9f9164 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.400444 4808 scope.go:117] "RemoveContainer" containerID="7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.400689 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2"} err="failed to get container status \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": rpc error: code = NotFound desc = could not find container \"7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2\": container with ID starting with 7129de475a69815e0ed556f2d45c208788efcd7ef2792f3d93994af37dee87b2 not found: ID does not exist" Mar 11 08:54:03 crc kubenswrapper[4808]: I0311 08:54:03.797270 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeac5d0-d84f-4776-ae37-a03c8f0f66b8" path="/var/lib/kubelet/pods/afeac5d0-d84f-4776-ae37-a03c8f0f66b8/volumes" Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.067603 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dgh9v_c1a75dfb-31dd-4275-a309-c9e7130feb05/kube-multus/2.log" Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.067984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dgh9v" event={"ID":"c1a75dfb-31dd-4275-a309-c9e7130feb05","Type":"ContainerStarted","Data":"67eba44bca6dc45e4d41a447ddab40a984529dcc9afcfefe229dc3190f385503"} Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.070135 4808 generic.go:334] "Generic (PLEG): container finished" podID="cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea" 
containerID="bab2555a2d92cb2a707fdc7f06c17d140d6addaf00cd4040bfc36874a6fbe4a5" exitCode=0 Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.070210 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerDied","Data":"bab2555a2d92cb2a707fdc7f06c17d140d6addaf00cd4040bfc36874a6fbe4a5"} Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.070244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"aa865c9e03a1fc7894a5cda42185260f79816c220a658914e31c1d49e9b24188"} Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.160022 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.333862 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcsrj\" (UniqueName: \"kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj\") pod \"f6a1463b-9472-494c-b46e-75c7098a9e62\" (UID: \"f6a1463b-9472-494c-b46e-75c7098a9e62\") " Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.340568 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj" (OuterVolumeSpecName: "kube-api-access-kcsrj") pod "f6a1463b-9472-494c-b46e-75c7098a9e62" (UID: "f6a1463b-9472-494c-b46e-75c7098a9e62"). InnerVolumeSpecName "kube-api-access-kcsrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:04 crc kubenswrapper[4808]: I0311 08:54:04.435173 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcsrj\" (UniqueName: \"kubernetes.io/projected/f6a1463b-9472-494c-b46e-75c7098a9e62-kube-api-access-kcsrj\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.080177 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"5ca207c9b24892bb9c594800ed9fcbf4d1919a2505d6f170c28b1f2a218615b3"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.080662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"f281236e3bf90b8143e30d8f96e7858f0fb00da115d27c16815d0ff791b6c09b"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.080690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"6e75a3e6108260eebda440a1a023fcf067e63cdec465986e4786d968f5355f9e"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.080707 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"fe0b9478e1009a27b5be192c1f796fd20cffa43410055fe56629c95b5ce2988b"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.080745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"be551e828b16907378f1567502fbd2bba991f30fbe2b5032b435d2891f6aae4d"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 
08:54:05.080762 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"ed1a2ca68fdfbc3786c9b9667808a675c9c7ff887f0e4c2f6a6e41d49d8ba368"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.081776 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" event={"ID":"f6a1463b-9472-494c-b46e-75c7098a9e62","Type":"ContainerDied","Data":"bd4075265961838cb30b03c1a9521a7bd81f801ca6a39f4de9bfbb7eb41ecbf2"} Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.081806 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4075265961838cb30b03c1a9521a7bd81f801ca6a39f4de9bfbb7eb41ecbf2" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.081848 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553654-fdqcq" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.214347 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553648-9lzqz"] Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.219914 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553648-9lzqz"] Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.795259 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73272a25-2d50-4ea7-8e43-a6c3d99e8523" path="/var/lib/kubelet/pods/73272a25-2d50-4ea7-8e43-a6c3d99e8523/volumes" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.811048 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kprp5"] Mar 11 08:54:05 crc kubenswrapper[4808]: E0311 08:54:05.811349 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a1463b-9472-494c-b46e-75c7098a9e62" containerName="oc" Mar 11 08:54:05 crc kubenswrapper[4808]: 
I0311 08:54:05.811394 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a1463b-9472-494c-b46e-75c7098a9e62" containerName="oc" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.811564 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a1463b-9472-494c-b46e-75c7098a9e62" containerName="oc" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.812105 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.815231 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.815275 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.815308 4808 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-hjgnt" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.815571 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.954049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:05 crc kubenswrapper[4808]: I0311 08:54:05.954213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:05 crc kubenswrapper[4808]: 
I0311 08:54:05.954452 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhgcd\" (UniqueName: \"kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.054971 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhgcd\" (UniqueName: \"kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.055104 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.055141 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.055352 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.056087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.091775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhgcd\" (UniqueName: \"kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd\") pod \"crc-storage-crc-kprp5\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: I0311 08:54:06.127512 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: E0311 08:54:06.174636 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(49aa26913dc45e8c8db7c1dafe7aae59763cb427c93fcff438484e32e91f03c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:54:06 crc kubenswrapper[4808]: E0311 08:54:06.174738 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(49aa26913dc45e8c8db7c1dafe7aae59763cb427c93fcff438484e32e91f03c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: E0311 08:54:06.174791 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(49aa26913dc45e8c8db7c1dafe7aae59763cb427c93fcff438484e32e91f03c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:06 crc kubenswrapper[4808]: E0311 08:54:06.174897 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kprp5_crc-storage(0de397de-380f-4ee1-9990-455ee195a71f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kprp5_crc-storage(0de397de-380f-4ee1-9990-455ee195a71f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(49aa26913dc45e8c8db7c1dafe7aae59763cb427c93fcff438484e32e91f03c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kprp5" podUID="0de397de-380f-4ee1-9990-455ee195a71f" Mar 11 08:54:08 crc kubenswrapper[4808]: I0311 08:54:08.109839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"704208907bfb3b170ce21511c0efcad425e7478658f76eb5202345e01c9d0f09"} Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.125465 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" event={"ID":"cd3ef4e6-9312-4e80-9c27-4bb48ff7a5ea","Type":"ContainerStarted","Data":"bb0d45db959437ce7c8e7f4af066b64c5c76a024578eff6033258ff84e996c55"} Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.125939 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.125976 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.125986 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.154050 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" podStartSLOduration=8.154033202 podStartE2EDuration="8.154033202s" podCreationTimestamp="2026-03-11 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:54:10.150105639 +0000 UTC m=+901.103428959" watchObservedRunningTime="2026-03-11 08:54:10.154033202 +0000 UTC m=+901.107356522" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.158597 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.159629 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.714400 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kprp5"] Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.714827 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:10 crc kubenswrapper[4808]: I0311 08:54:10.715253 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:10 crc kubenswrapper[4808]: E0311 08:54:10.740786 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(bbe422104cd80a682c17b5deb927deb309a2f722a1bff5ba11e136c1698288b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:54:10 crc kubenswrapper[4808]: E0311 08:54:10.740851 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(bbe422104cd80a682c17b5deb927deb309a2f722a1bff5ba11e136c1698288b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:10 crc kubenswrapper[4808]: E0311 08:54:10.740870 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(bbe422104cd80a682c17b5deb927deb309a2f722a1bff5ba11e136c1698288b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:10 crc kubenswrapper[4808]: E0311 08:54:10.740923 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kprp5_crc-storage(0de397de-380f-4ee1-9990-455ee195a71f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kprp5_crc-storage(0de397de-380f-4ee1-9990-455ee195a71f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kprp5_crc-storage_0de397de-380f-4ee1-9990-455ee195a71f_0(bbe422104cd80a682c17b5deb927deb309a2f722a1bff5ba11e136c1698288b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kprp5" podUID="0de397de-380f-4ee1-9990-455ee195a71f" Mar 11 08:54:24 crc kubenswrapper[4808]: I0311 08:54:24.789049 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:24 crc kubenswrapper[4808]: I0311 08:54:24.790265 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:25 crc kubenswrapper[4808]: I0311 08:54:25.249976 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kprp5"] Mar 11 08:54:25 crc kubenswrapper[4808]: W0311 08:54:25.262307 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de397de_380f_4ee1_9990_455ee195a71f.slice/crio-88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7 WatchSource:0}: Error finding container 88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7: Status 404 returned error can't find the container with id 88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7 Mar 11 08:54:26 crc kubenswrapper[4808]: I0311 08:54:26.229055 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kprp5" event={"ID":"0de397de-380f-4ee1-9990-455ee195a71f","Type":"ContainerStarted","Data":"88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7"} Mar 11 08:54:27 crc kubenswrapper[4808]: I0311 08:54:27.240197 4808 generic.go:334] "Generic (PLEG): container finished" podID="0de397de-380f-4ee1-9990-455ee195a71f" containerID="3ce4fd899ebb7cd15a305e5567fd153aa982549d70f7108f8fcbccbee1533a07" exitCode=0 Mar 11 08:54:27 crc kubenswrapper[4808]: I0311 08:54:27.240482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kprp5" event={"ID":"0de397de-380f-4ee1-9990-455ee195a71f","Type":"ContainerDied","Data":"3ce4fd899ebb7cd15a305e5567fd153aa982549d70f7108f8fcbccbee1533a07"} Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.451903 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.556729 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage\") pod \"0de397de-380f-4ee1-9990-455ee195a71f\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.556877 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt\") pod \"0de397de-380f-4ee1-9990-455ee195a71f\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.556929 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhgcd\" (UniqueName: \"kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd\") pod \"0de397de-380f-4ee1-9990-455ee195a71f\" (UID: \"0de397de-380f-4ee1-9990-455ee195a71f\") " Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.557013 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "0de397de-380f-4ee1-9990-455ee195a71f" (UID: "0de397de-380f-4ee1-9990-455ee195a71f"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.557380 4808 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0de397de-380f-4ee1-9990-455ee195a71f-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.564649 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd" (OuterVolumeSpecName: "kube-api-access-vhgcd") pod "0de397de-380f-4ee1-9990-455ee195a71f" (UID: "0de397de-380f-4ee1-9990-455ee195a71f"). InnerVolumeSpecName "kube-api-access-vhgcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.581465 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "0de397de-380f-4ee1-9990-455ee195a71f" (UID: "0de397de-380f-4ee1-9990-455ee195a71f"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.658197 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhgcd\" (UniqueName: \"kubernetes.io/projected/0de397de-380f-4ee1-9990-455ee195a71f-kube-api-access-vhgcd\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:28 crc kubenswrapper[4808]: I0311 08:54:28.658243 4808 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0de397de-380f-4ee1-9990-455ee195a71f-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:29 crc kubenswrapper[4808]: I0311 08:54:29.250796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kprp5" event={"ID":"0de397de-380f-4ee1-9990-455ee195a71f","Type":"ContainerDied","Data":"88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7"} Mar 11 08:54:29 crc kubenswrapper[4808]: I0311 08:54:29.250845 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f1f0b1ad61a5907b6e4f77fd4e03079169610ce55a3321bd27c86cbb889be7" Mar 11 08:54:29 crc kubenswrapper[4808]: I0311 08:54:29.250870 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kprp5" Mar 11 08:54:33 crc kubenswrapper[4808]: I0311 08:54:33.131072 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nqzhb" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.107863 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4"] Mar 11 08:54:35 crc kubenswrapper[4808]: E0311 08:54:35.108167 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de397de-380f-4ee1-9990-455ee195a71f" containerName="storage" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.108187 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de397de-380f-4ee1-9990-455ee195a71f" containerName="storage" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.108342 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de397de-380f-4ee1-9990-455ee195a71f" containerName="storage" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.109590 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.116097 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4"] Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.116659 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.244317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88bh\" (UniqueName: \"kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.244481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.244539 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: 
I0311 08:54:35.345316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88bh\" (UniqueName: \"kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.345457 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.345497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.346050 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.346084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.370065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88bh\" (UniqueName: \"kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.454320 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:35 crc kubenswrapper[4808]: I0311 08:54:35.899386 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4"] Mar 11 08:54:36 crc kubenswrapper[4808]: I0311 08:54:36.031827 4808 scope.go:117] "RemoveContainer" containerID="114f83b7949b2db80f6af93611df19dc9c04b09c2b7ac3d5318ddaf68aaee213" Mar 11 08:54:36 crc kubenswrapper[4808]: I0311 08:54:36.299985 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerStarted","Data":"077f6fa2076d177011f379adbb6fb03756be6cc4f72aecc589b56baed0e1471d"} Mar 11 08:54:36 crc kubenswrapper[4808]: I0311 08:54:36.300042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" 
event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerStarted","Data":"f16b98a3c6b90f3b28bdb938e3e1ce86ee878b1612db333756553dbef7d217f7"} Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.308576 4808 generic.go:334] "Generic (PLEG): container finished" podID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerID="077f6fa2076d177011f379adbb6fb03756be6cc4f72aecc589b56baed0e1471d" exitCode=0 Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.308623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerDied","Data":"077f6fa2076d177011f379adbb6fb03756be6cc4f72aecc589b56baed0e1471d"} Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.475822 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.485565 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.497343 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.674335 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.674421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6pk\" (UniqueName: \"kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.674487 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.775404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.775495 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.775522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6pk\" (UniqueName: \"kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.776142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.776279 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.805340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6pk\" (UniqueName: \"kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk\") pod \"redhat-operators-d7l9g\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.814811 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:37 crc kubenswrapper[4808]: I0311 08:54:37.997764 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:38 crc kubenswrapper[4808]: W0311 08:54:38.006101 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5ea36f_2728_4f44_9c45_6d8b6b03ffd2.slice/crio-8cde518dd9b82b2b7ead2869876542aa7acba20df7e3c69c489ec748b4ad58c9 WatchSource:0}: Error finding container 8cde518dd9b82b2b7ead2869876542aa7acba20df7e3c69c489ec748b4ad58c9: Status 404 returned error can't find the container with id 8cde518dd9b82b2b7ead2869876542aa7acba20df7e3c69c489ec748b4ad58c9 Mar 11 08:54:38 crc kubenswrapper[4808]: I0311 08:54:38.318488 4808 generic.go:334] "Generic (PLEG): container finished" podID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerID="68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b" exitCode=0 Mar 11 08:54:38 crc kubenswrapper[4808]: I0311 08:54:38.318554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerDied","Data":"68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b"} Mar 11 08:54:38 crc kubenswrapper[4808]: I0311 08:54:38.318826 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerStarted","Data":"8cde518dd9b82b2b7ead2869876542aa7acba20df7e3c69c489ec748b4ad58c9"} Mar 11 08:54:39 crc kubenswrapper[4808]: I0311 08:54:39.325279 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" 
event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerStarted","Data":"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46"} Mar 11 08:54:39 crc kubenswrapper[4808]: I0311 08:54:39.327448 4808 generic.go:334] "Generic (PLEG): container finished" podID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerID="df70fb1e62b2866306f38d5aabd89b676962d84e704aeb6a1d4fa2b8b4f16308" exitCode=0 Mar 11 08:54:39 crc kubenswrapper[4808]: I0311 08:54:39.327499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerDied","Data":"df70fb1e62b2866306f38d5aabd89b676962d84e704aeb6a1d4fa2b8b4f16308"} Mar 11 08:54:40 crc kubenswrapper[4808]: I0311 08:54:40.337888 4808 generic.go:334] "Generic (PLEG): container finished" podID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerID="e1c073deb82ffb5eab68b90c62a9ac1f6ef196f736838475dfb9c520c4e5ffbf" exitCode=0 Mar 11 08:54:40 crc kubenswrapper[4808]: I0311 08:54:40.337995 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerDied","Data":"e1c073deb82ffb5eab68b90c62a9ac1f6ef196f736838475dfb9c520c4e5ffbf"} Mar 11 08:54:40 crc kubenswrapper[4808]: I0311 08:54:40.341839 4808 generic.go:334] "Generic (PLEG): container finished" podID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerID="00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46" exitCode=0 Mar 11 08:54:40 crc kubenswrapper[4808]: I0311 08:54:40.341917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerDied","Data":"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46"} Mar 11 08:54:41 crc kubenswrapper[4808]: 
I0311 08:54:41.350100 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerStarted","Data":"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73"} Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.374252 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7l9g" podStartSLOduration=1.96045836 podStartE2EDuration="4.374234133s" podCreationTimestamp="2026-03-11 08:54:37 +0000 UTC" firstStartedPulling="2026-03-11 08:54:38.320261341 +0000 UTC m=+929.273584661" lastFinishedPulling="2026-03-11 08:54:40.734037074 +0000 UTC m=+931.687360434" observedRunningTime="2026-03-11 08:54:41.37104929 +0000 UTC m=+932.324372650" watchObservedRunningTime="2026-03-11 08:54:41.374234133 +0000 UTC m=+932.327557453" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.630726 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.768434 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle\") pod \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.768533 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88bh\" (UniqueName: \"kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh\") pod \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.769853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util\") pod \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\" (UID: \"ad1db85c-8250-4d3f-a27a-e4680d126b3d\") " Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.770336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle" (OuterVolumeSpecName: "bundle") pod "ad1db85c-8250-4d3f-a27a-e4680d126b3d" (UID: "ad1db85c-8250-4d3f-a27a-e4680d126b3d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.770623 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.774970 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh" (OuterVolumeSpecName: "kube-api-access-v88bh") pod "ad1db85c-8250-4d3f-a27a-e4680d126b3d" (UID: "ad1db85c-8250-4d3f-a27a-e4680d126b3d"). InnerVolumeSpecName "kube-api-access-v88bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.783809 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util" (OuterVolumeSpecName: "util") pod "ad1db85c-8250-4d3f-a27a-e4680d126b3d" (UID: "ad1db85c-8250-4d3f-a27a-e4680d126b3d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.872241 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad1db85c-8250-4d3f-a27a-e4680d126b3d-util\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:41 crc kubenswrapper[4808]: I0311 08:54:41.872271 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88bh\" (UniqueName: \"kubernetes.io/projected/ad1db85c-8250-4d3f-a27a-e4680d126b3d-kube-api-access-v88bh\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:42 crc kubenswrapper[4808]: I0311 08:54:42.356967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" event={"ID":"ad1db85c-8250-4d3f-a27a-e4680d126b3d","Type":"ContainerDied","Data":"f16b98a3c6b90f3b28bdb938e3e1ce86ee878b1612db333756553dbef7d217f7"} Mar 11 08:54:42 crc kubenswrapper[4808]: I0311 08:54:42.357060 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16b98a3c6b90f3b28bdb938e3e1ce86ee878b1612db333756553dbef7d217f7" Mar 11 08:54:42 crc kubenswrapper[4808]: I0311 08:54:42.356991 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.258481 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:44 crc kubenswrapper[4808]: E0311 08:54:44.258710 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="pull" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.258722 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="pull" Mar 11 08:54:44 crc kubenswrapper[4808]: E0311 08:54:44.258734 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="extract" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.258740 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="extract" Mar 11 08:54:44 crc kubenswrapper[4808]: E0311 08:54:44.258748 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="util" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.258754 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="util" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.258836 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1db85c-8250-4d3f-a27a-e4680d126b3d" containerName="extract" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.259639 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.277146 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.301781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.301869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.301910 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2n8v\" (UniqueName: \"kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.403498 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.403573 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v2n8v\" (UniqueName: \"kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.403629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.403968 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.404095 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.420620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2n8v\" (UniqueName: \"kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v\") pod \"redhat-marketplace-hnb2g\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.573514 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:44 crc kubenswrapper[4808]: I0311 08:54:44.770065 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:44 crc kubenswrapper[4808]: W0311 08:54:44.776790 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a9e52e_5f5c_45cb_a809_a872ece0cd2f.slice/crio-6a3c6fd8dae3bf353f6332bcb6fcd7cc0f594f8094c1f0d66f0cf907cdcc964d WatchSource:0}: Error finding container 6a3c6fd8dae3bf353f6332bcb6fcd7cc0f594f8094c1f0d66f0cf907cdcc964d: Status 404 returned error can't find the container with id 6a3c6fd8dae3bf353f6332bcb6fcd7cc0f594f8094c1f0d66f0cf907cdcc964d Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.380533 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerStarted","Data":"4ecd84343d8ddd3ea233e82256bb79cdd34990b8288051c3a6f2de52fccfaa39"} Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.381644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerStarted","Data":"6a3c6fd8dae3bf353f6332bcb6fcd7cc0f594f8094c1f0d66f0cf907cdcc964d"} Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.872064 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv"] Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.872792 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.874445 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.874943 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.875756 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b2p4m" Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.890936 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv"] Mar 11 08:54:45 crc kubenswrapper[4808]: I0311 08:54:45.919468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6hj\" (UniqueName: \"kubernetes.io/projected/5c6a1c35-c3e7-41df-b410-34578fad1d2d-kube-api-access-bg6hj\") pod \"nmstate-operator-796d4cfff4-6bfzv\" (UID: \"5c6a1c35-c3e7-41df-b410-34578fad1d2d\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.021133 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6hj\" (UniqueName: \"kubernetes.io/projected/5c6a1c35-c3e7-41df-b410-34578fad1d2d-kube-api-access-bg6hj\") pod \"nmstate-operator-796d4cfff4-6bfzv\" (UID: \"5c6a1c35-c3e7-41df-b410-34578fad1d2d\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.027268 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.027336 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.043587 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg6hj\" (UniqueName: \"kubernetes.io/projected/5c6a1c35-c3e7-41df-b410-34578fad1d2d-kube-api-access-bg6hj\") pod \"nmstate-operator-796d4cfff4-6bfzv\" (UID: \"5c6a1c35-c3e7-41df-b410-34578fad1d2d\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.194388 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.394797 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv"] Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.396808 4808 generic.go:334] "Generic (PLEG): container finished" podID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerID="4ecd84343d8ddd3ea233e82256bb79cdd34990b8288051c3a6f2de52fccfaa39" exitCode=0 Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.396838 4808 generic.go:334] "Generic (PLEG): container finished" podID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerID="4f1bdc8ca1c5a54ecf87f5a3e257a34a110f5db93797657648ff7ff41c7cd651" exitCode=0 Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.396867 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" 
event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerDied","Data":"4ecd84343d8ddd3ea233e82256bb79cdd34990b8288051c3a6f2de52fccfaa39"} Mar 11 08:54:46 crc kubenswrapper[4808]: I0311 08:54:46.396924 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerDied","Data":"4f1bdc8ca1c5a54ecf87f5a3e257a34a110f5db93797657648ff7ff41c7cd651"} Mar 11 08:54:46 crc kubenswrapper[4808]: W0311 08:54:46.420284 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c6a1c35_c3e7_41df_b410_34578fad1d2d.slice/crio-cb5a4a878a598f3759eb7fe3d5b74ac5c6518636c42d8717ebc6f38b1ded8a62 WatchSource:0}: Error finding container cb5a4a878a598f3759eb7fe3d5b74ac5c6518636c42d8717ebc6f38b1ded8a62: Status 404 returned error can't find the container with id cb5a4a878a598f3759eb7fe3d5b74ac5c6518636c42d8717ebc6f38b1ded8a62 Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.404079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerStarted","Data":"beb108dc26bdaae8efc7506371767c15b98584a9f3cb53ffa270d38ea145e80a"} Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.405068 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" event={"ID":"5c6a1c35-c3e7-41df-b410-34578fad1d2d","Type":"ContainerStarted","Data":"cb5a4a878a598f3759eb7fe3d5b74ac5c6518636c42d8717ebc6f38b1ded8a62"} Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.421707 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnb2g" podStartSLOduration=1.9552249929999999 podStartE2EDuration="3.421686682s" podCreationTimestamp="2026-03-11 08:54:44 +0000 UTC" 
firstStartedPulling="2026-03-11 08:54:45.381872407 +0000 UTC m=+936.335195727" lastFinishedPulling="2026-03-11 08:54:46.848334096 +0000 UTC m=+937.801657416" observedRunningTime="2026-03-11 08:54:47.418077697 +0000 UTC m=+938.371401017" watchObservedRunningTime="2026-03-11 08:54:47.421686682 +0000 UTC m=+938.375010002" Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.815678 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.815726 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:47 crc kubenswrapper[4808]: I0311 08:54:47.856521 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:48 crc kubenswrapper[4808]: I0311 08:54:48.461457 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:49 crc kubenswrapper[4808]: I0311 08:54:49.428705 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" event={"ID":"5c6a1c35-c3e7-41df-b410-34578fad1d2d","Type":"ContainerStarted","Data":"066a828dd267fa2cb5802d88494b064ef2de4f73a9170646bcfe0f4de6ddcea8"} Mar 11 08:54:49 crc kubenswrapper[4808]: I0311 08:54:49.456244 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-6bfzv" podStartSLOduration=2.345750941 podStartE2EDuration="4.45622244s" podCreationTimestamp="2026-03-11 08:54:45 +0000 UTC" firstStartedPulling="2026-03-11 08:54:46.423544095 +0000 UTC m=+937.376867405" lastFinishedPulling="2026-03-11 08:54:48.534015584 +0000 UTC m=+939.487338904" observedRunningTime="2026-03-11 08:54:49.451157637 +0000 UTC m=+940.404480987" watchObservedRunningTime="2026-03-11 
08:54:49.45622244 +0000 UTC m=+940.409545780" Mar 11 08:54:51 crc kubenswrapper[4808]: I0311 08:54:51.049967 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:51 crc kubenswrapper[4808]: I0311 08:54:51.050319 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7l9g" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="registry-server" containerID="cri-o://712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73" gracePeriod=2 Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.295725 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.400096 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities\") pod \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.400140 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content\") pod \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.400217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6pk\" (UniqueName: \"kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk\") pod \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\" (UID: \"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2\") " Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.401160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities" (OuterVolumeSpecName: "utilities") pod "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" (UID: "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.405254 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk" (OuterVolumeSpecName: "kube-api-access-7v6pk") pod "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" (UID: "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2"). InnerVolumeSpecName "kube-api-access-7v6pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.445569 4808 generic.go:334] "Generic (PLEG): container finished" podID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerID="712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73" exitCode=0 Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.445623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerDied","Data":"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73"} Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.445655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7l9g" event={"ID":"3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2","Type":"ContainerDied","Data":"8cde518dd9b82b2b7ead2869876542aa7acba20df7e3c69c489ec748b4ad58c9"} Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.445668 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7l9g" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.445676 4808 scope.go:117] "RemoveContainer" containerID="712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.462129 4808 scope.go:117] "RemoveContainer" containerID="00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.485435 4808 scope.go:117] "RemoveContainer" containerID="68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.501683 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6pk\" (UniqueName: \"kubernetes.io/projected/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-kube-api-access-7v6pk\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.501741 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.504445 4808 scope.go:117] "RemoveContainer" containerID="712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73" Mar 11 08:54:52 crc kubenswrapper[4808]: E0311 08:54:52.505001 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73\": container with ID starting with 712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73 not found: ID does not exist" containerID="712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.505040 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73"} err="failed to get container status \"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73\": rpc error: code = NotFound desc = could not find container \"712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73\": container with ID starting with 712944822d93943337a36a0327570439a65b96beac8e78fa72be762fe140dd73 not found: ID does not exist" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.505060 4808 scope.go:117] "RemoveContainer" containerID="00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46" Mar 11 08:54:52 crc kubenswrapper[4808]: E0311 08:54:52.505348 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46\": container with ID starting with 00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46 not found: ID does not exist" containerID="00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.505451 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46"} err="failed to get container status \"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46\": rpc error: code = NotFound desc = could not find container \"00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46\": container with ID starting with 00b2d4e879f755539c80e8c8dd19ff1df98dec5d1c3fc83df7c6bf3cff07bc46 not found: ID does not exist" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.505508 4808 scope.go:117] "RemoveContainer" containerID="68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b" Mar 11 08:54:52 crc kubenswrapper[4808]: E0311 08:54:52.505997 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b\": container with ID starting with 68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b not found: ID does not exist" containerID="68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.506022 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b"} err="failed to get container status \"68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b\": rpc error: code = NotFound desc = could not find container \"68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b\": container with ID starting with 68ab8b679157e2da7cce9c1a416d05e93916f1aa18fca540f9c80f306465f14b not found: ID does not exist" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.526218 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" (UID: "3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.603726 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.771807 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:52 crc kubenswrapper[4808]: I0311 08:54:52.777868 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7l9g"] Mar 11 08:54:53 crc kubenswrapper[4808]: I0311 08:54:53.796138 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" path="/var/lib/kubelet/pods/3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2/volumes" Mar 11 08:54:54 crc kubenswrapper[4808]: I0311 08:54:54.573641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:54 crc kubenswrapper[4808]: I0311 08:54:54.573698 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:54 crc kubenswrapper[4808]: I0311 08:54:54.614273 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.077803 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k"] Mar 11 08:54:55 crc kubenswrapper[4808]: E0311 08:54:55.078022 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="extract-content" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.078033 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="extract-content" Mar 11 08:54:55 crc kubenswrapper[4808]: E0311 08:54:55.078046 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="extract-utilities" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.078051 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="extract-utilities" Mar 11 08:54:55 crc kubenswrapper[4808]: E0311 08:54:55.078061 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="registry-server" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.078067 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="registry-server" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.078152 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5ea36f-2728-4f44-9c45-6d8b6b03ffd2" containerName="registry-server" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.078674 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.080381 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-shhlz" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.091745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.096927 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-ds975"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.097704 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.103959 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.114492 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-ds975"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.132000 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bs5\" (UniqueName: \"kubernetes.io/projected/85875097-abf3-4f4d-baa8-19ee0d1b85e7-kube-api-access-v7bs5\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.132102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.132196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggcp\" (UniqueName: \"kubernetes.io/projected/05ae43c7-5b21-4813-adfe-8906527c2a44-kube-api-access-mggcp\") pod \"nmstate-metrics-9b8c8685d-trl6k\" (UID: \"05ae43c7-5b21-4813-adfe-8906527c2a44\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.132198 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rzf4d"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.133059 4808 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.203115 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.203912 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.205586 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jpgl6" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.206020 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.206875 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233301 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-ovs-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233370 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mggcp\" (UniqueName: \"kubernetes.io/projected/05ae43c7-5b21-4813-adfe-8906527c2a44-kube-api-access-mggcp\") pod \"nmstate-metrics-9b8c8685d-trl6k\" (UID: \"05ae43c7-5b21-4813-adfe-8906527c2a44\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233431 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-nmstate-lock\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233455 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bs5\" (UniqueName: \"kubernetes.io/projected/85875097-abf3-4f4d-baa8-19ee0d1b85e7-kube-api-access-v7bs5\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233486 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-dbus-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/80c7794c-5470-4f6b-8f7d-abc0e9f31785-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233548 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233588 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c7794c-5470-4f6b-8f7d-abc0e9f31785-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233634 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mfwt\" (UniqueName: \"kubernetes.io/projected/80c7794c-5470-4f6b-8f7d-abc0e9f31785-kube-api-access-7mfwt\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.233662 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ml7\" (UniqueName: \"kubernetes.io/projected/bac5717e-202e-43a5-a822-a7eeedc11af5-kube-api-access-j2ml7\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: E0311 08:54:55.234850 4808 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 11 08:54:55 crc kubenswrapper[4808]: E0311 08:54:55.234903 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair podName:85875097-abf3-4f4d-baa8-19ee0d1b85e7 nodeName:}" failed. No retries permitted until 2026-03-11 08:54:55.734886029 +0000 UTC m=+946.688209349 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair") pod "nmstate-webhook-5f558f5558-ds975" (UID: "85875097-abf3-4f4d-baa8-19ee0d1b85e7") : secret "openshift-nmstate-webhook" not found Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.243810 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.259351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bs5\" (UniqueName: \"kubernetes.io/projected/85875097-abf3-4f4d-baa8-19ee0d1b85e7-kube-api-access-v7bs5\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.271832 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mggcp\" (UniqueName: \"kubernetes.io/projected/05ae43c7-5b21-4813-adfe-8906527c2a44-kube-api-access-mggcp\") pod \"nmstate-metrics-9b8c8685d-trl6k\" (UID: \"05ae43c7-5b21-4813-adfe-8906527c2a44\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.334889 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mfwt\" (UniqueName: \"kubernetes.io/projected/80c7794c-5470-4f6b-8f7d-abc0e9f31785-kube-api-access-7mfwt\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.334939 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ml7\" (UniqueName: \"kubernetes.io/projected/bac5717e-202e-43a5-a822-a7eeedc11af5-kube-api-access-j2ml7\") pod 
\"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.334960 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-ovs-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.334997 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-nmstate-lock\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.335024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-dbus-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.335051 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/80c7794c-5470-4f6b-8f7d-abc0e9f31785-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.335082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c7794c-5470-4f6b-8f7d-abc0e9f31785-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: 
\"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.335551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-ovs-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.335893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-nmstate-lock\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.336045 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bac5717e-202e-43a5-a822-a7eeedc11af5-dbus-socket\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.336540 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/80c7794c-5470-4f6b-8f7d-abc0e9f31785-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.337770 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/80c7794c-5470-4f6b-8f7d-abc0e9f31785-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.351479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mfwt\" (UniqueName: \"kubernetes.io/projected/80c7794c-5470-4f6b-8f7d-abc0e9f31785-kube-api-access-7mfwt\") pod \"nmstate-console-plugin-86f58fcf4-cnw9s\" (UID: \"80c7794c-5470-4f6b-8f7d-abc0e9f31785\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.352011 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ml7\" (UniqueName: \"kubernetes.io/projected/bac5717e-202e-43a5-a822-a7eeedc11af5-kube-api-access-j2ml7\") pod \"nmstate-handler-rzf4d\" (UID: \"bac5717e-202e-43a5-a822-a7eeedc11af5\") " pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.395975 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.406171 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-678558f478-lm7fx"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.406981 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.419813 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678558f478-lm7fx"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrsf\" (UniqueName: \"kubernetes.io/projected/b398edfc-49af-43cf-bd89-96c8b12ce604-kube-api-access-ngrsf\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-oauth-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436586 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-service-ca\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436608 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436635 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-console-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436651 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-oauth-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.436690 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-trusted-ca-bundle\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.446038 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.518451 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.520732 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537697 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537766 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-console-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537787 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-oauth-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537824 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-trusted-ca-bundle\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537875 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrsf\" (UniqueName: \"kubernetes.io/projected/b398edfc-49af-43cf-bd89-96c8b12ce604-kube-api-access-ngrsf\") pod \"console-678558f478-lm7fx\" (UID: 
\"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-oauth-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.537910 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-service-ca\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.539035 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-service-ca\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.539195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-oauth-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.540096 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-console-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " 
pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.540553 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b398edfc-49af-43cf-bd89-96c8b12ce604-trusted-ca-bundle\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.545703 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-serving-cert\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.545797 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b398edfc-49af-43cf-bd89-96c8b12ce604-console-oauth-config\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.564242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrsf\" (UniqueName: \"kubernetes.io/projected/b398edfc-49af-43cf-bd89-96c8b12ce604-kube-api-access-ngrsf\") pod \"console-678558f478-lm7fx\" (UID: \"b398edfc-49af-43cf-bd89-96c8b12ce604\") " pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.741069 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.745128 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/85875097-abf3-4f4d-baa8-19ee0d1b85e7-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-ds975\" (UID: \"85875097-abf3-4f4d-baa8-19ee0d1b85e7\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.754076 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.807999 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k"] Mar 11 08:54:55 crc kubenswrapper[4808]: W0311 08:54:55.810567 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05ae43c7_5b21_4813_adfe_8906527c2a44.slice/crio-e73d7e34a875854cc1d6849ff86cda318264786816a530aebade41425cfedab0 WatchSource:0}: Error finding container e73d7e34a875854cc1d6849ff86cda318264786816a530aebade41425cfedab0: Status 404 returned error can't find the container with id e73d7e34a875854cc1d6849ff86cda318264786816a530aebade41425cfedab0 Mar 11 08:54:55 crc kubenswrapper[4808]: W0311 08:54:55.933180 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80c7794c_5470_4f6b_8f7d_abc0e9f31785.slice/crio-5823910bd63a0c3aa2e3382e3d6bae3e5818f22048c43b4406b72476eea0462f WatchSource:0}: Error finding container 5823910bd63a0c3aa2e3382e3d6bae3e5818f22048c43b4406b72476eea0462f: Status 404 returned error can't find the container with id 5823910bd63a0c3aa2e3382e3d6bae3e5818f22048c43b4406b72476eea0462f Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.934149 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s"] Mar 11 08:54:55 crc kubenswrapper[4808]: I0311 08:54:55.976166 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678558f478-lm7fx"] Mar 11 08:54:55 crc kubenswrapper[4808]: W0311 08:54:55.986328 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb398edfc_49af_43cf_bd89_96c8b12ce604.slice/crio-7057243100c806be6d88ee28bbab25399f908f9f3979207aaf1a21f4184aa825 WatchSource:0}: Error finding container 7057243100c806be6d88ee28bbab25399f908f9f3979207aaf1a21f4184aa825: Status 404 returned error can't find the container with id 7057243100c806be6d88ee28bbab25399f908f9f3979207aaf1a21f4184aa825 Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.016025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.205258 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-ds975"] Mar 11 08:54:56 crc kubenswrapper[4808]: W0311 08:54:56.212082 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85875097_abf3_4f4d_baa8_19ee0d1b85e7.slice/crio-63610b2598c9ff99a5218d096a5469685ae5cbfd98cdc338b86c2012f85bd536 WatchSource:0}: Error finding container 63610b2598c9ff99a5218d096a5469685ae5cbfd98cdc338b86c2012f85bd536: Status 404 returned error can't find the container with id 63610b2598c9ff99a5218d096a5469685ae5cbfd98cdc338b86c2012f85bd536 Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.472468 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" 
event={"ID":"05ae43c7-5b21-4813-adfe-8906527c2a44","Type":"ContainerStarted","Data":"e73d7e34a875854cc1d6849ff86cda318264786816a530aebade41425cfedab0"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.474219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678558f478-lm7fx" event={"ID":"b398edfc-49af-43cf-bd89-96c8b12ce604","Type":"ContainerStarted","Data":"3c30afc9dc558ecdba01336ee80d727f1d37cce2b3dae35e32a0813b0f4c9924"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.474286 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678558f478-lm7fx" event={"ID":"b398edfc-49af-43cf-bd89-96c8b12ce604","Type":"ContainerStarted","Data":"7057243100c806be6d88ee28bbab25399f908f9f3979207aaf1a21f4184aa825"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.476901 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rzf4d" event={"ID":"bac5717e-202e-43a5-a822-a7eeedc11af5","Type":"ContainerStarted","Data":"cc9aae94fe910e9c92f619836e9413b7a58ccaaae6a1cbb1ce25870faec96cac"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.479302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" event={"ID":"85875097-abf3-4f4d-baa8-19ee0d1b85e7","Type":"ContainerStarted","Data":"63610b2598c9ff99a5218d096a5469685ae5cbfd98cdc338b86c2012f85bd536"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.480447 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" event={"ID":"80c7794c-5470-4f6b-8f7d-abc0e9f31785","Type":"ContainerStarted","Data":"5823910bd63a0c3aa2e3382e3d6bae3e5818f22048c43b4406b72476eea0462f"} Mar 11 08:54:56 crc kubenswrapper[4808]: I0311 08:54:56.495123 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-678558f478-lm7fx" podStartSLOduration=1.495096849 
podStartE2EDuration="1.495096849s" podCreationTimestamp="2026-03-11 08:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:54:56.492813469 +0000 UTC m=+947.446136829" watchObservedRunningTime="2026-03-11 08:54:56.495096849 +0000 UTC m=+947.448420199" Mar 11 08:54:57 crc kubenswrapper[4808]: I0311 08:54:57.245067 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:57 crc kubenswrapper[4808]: I0311 08:54:57.485936 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnb2g" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="registry-server" containerID="cri-o://beb108dc26bdaae8efc7506371767c15b98584a9f3cb53ffa270d38ea145e80a" gracePeriod=2 Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.500706 4808 generic.go:334] "Generic (PLEG): container finished" podID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerID="beb108dc26bdaae8efc7506371767c15b98584a9f3cb53ffa270d38ea145e80a" exitCode=0 Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.500745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerDied","Data":"beb108dc26bdaae8efc7506371767c15b98584a9f3cb53ffa270d38ea145e80a"} Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.616603 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.686114 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2n8v\" (UniqueName: \"kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v\") pod \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.686425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities\") pod \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.686577 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content\") pod \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\" (UID: \"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f\") " Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.693697 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v" (OuterVolumeSpecName: "kube-api-access-v2n8v") pod "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" (UID: "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f"). InnerVolumeSpecName "kube-api-access-v2n8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.695858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities" (OuterVolumeSpecName: "utilities") pod "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" (UID: "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.723224 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" (UID: "d3a9e52e-5f5c-45cb-a809-a872ece0cd2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.788612 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.788660 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2n8v\" (UniqueName: \"kubernetes.io/projected/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-kube-api-access-v2n8v\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:58 crc kubenswrapper[4808]: I0311 08:54:58.788684 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.507903 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" event={"ID":"05ae43c7-5b21-4813-adfe-8906527c2a44","Type":"ContainerStarted","Data":"cf04f6ae04962468d91efd1d020bc0275cfa0b06a9176cfd10862d783fc318b8"} Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.509372 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rzf4d" event={"ID":"bac5717e-202e-43a5-a822-a7eeedc11af5","Type":"ContainerStarted","Data":"575120aea6d038cb0aa9da6149b0fe1546ac0d92f82a45a40cae436e127e0f9c"} Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 
08:54:59.509522 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.512132 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" event={"ID":"85875097-abf3-4f4d-baa8-19ee0d1b85e7","Type":"ContainerStarted","Data":"730ba7dcc08fe108b4f81bc18801f01ab0a9cf281c7fbfec98d3e477db116cfb"} Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.512308 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.518781 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnb2g" event={"ID":"d3a9e52e-5f5c-45cb-a809-a872ece0cd2f","Type":"ContainerDied","Data":"6a3c6fd8dae3bf353f6332bcb6fcd7cc0f594f8094c1f0d66f0cf907cdcc964d"} Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.518833 4808 scope.go:117] "RemoveContainer" containerID="beb108dc26bdaae8efc7506371767c15b98584a9f3cb53ffa270d38ea145e80a" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.518857 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnb2g" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.521508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" event={"ID":"80c7794c-5470-4f6b-8f7d-abc0e9f31785","Type":"ContainerStarted","Data":"a9f811cba4909d2942d0564266433772ea35d5ea083f26fcfe3b9592544e41ed"} Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.546249 4808 scope.go:117] "RemoveContainer" containerID="4f1bdc8ca1c5a54ecf87f5a3e257a34a110f5db93797657648ff7ff41c7cd651" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.548616 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rzf4d" podStartSLOduration=1.333419229 podStartE2EDuration="4.548597979s" podCreationTimestamp="2026-03-11 08:54:55 +0000 UTC" firstStartedPulling="2026-03-11 08:54:55.463777172 +0000 UTC m=+946.417100492" lastFinishedPulling="2026-03-11 08:54:58.678955922 +0000 UTC m=+949.632279242" observedRunningTime="2026-03-11 08:54:59.531132101 +0000 UTC m=+950.484455441" watchObservedRunningTime="2026-03-11 08:54:59.548597979 +0000 UTC m=+950.501921299" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.552415 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cnw9s" podStartSLOduration=1.825745681 podStartE2EDuration="4.552396159s" podCreationTimestamp="2026-03-11 08:54:55 +0000 UTC" firstStartedPulling="2026-03-11 08:54:55.934368203 +0000 UTC m=+946.887691533" lastFinishedPulling="2026-03-11 08:54:58.661018691 +0000 UTC m=+949.614342011" observedRunningTime="2026-03-11 08:54:59.543418593 +0000 UTC m=+950.496741943" watchObservedRunningTime="2026-03-11 08:54:59.552396159 +0000 UTC m=+950.505719479" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.566940 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" podStartSLOduration=2.119651389 podStartE2EDuration="4.56691661s" podCreationTimestamp="2026-03-11 08:54:55 +0000 UTC" firstStartedPulling="2026-03-11 08:54:56.221154685 +0000 UTC m=+947.174478005" lastFinishedPulling="2026-03-11 08:54:58.668419906 +0000 UTC m=+949.621743226" observedRunningTime="2026-03-11 08:54:59.562844953 +0000 UTC m=+950.516168273" watchObservedRunningTime="2026-03-11 08:54:59.56691661 +0000 UTC m=+950.520239930" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.572390 4808 scope.go:117] "RemoveContainer" containerID="4ecd84343d8ddd3ea233e82256bb79cdd34990b8288051c3a6f2de52fccfaa39" Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.614319 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.630441 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnb2g"] Mar 11 08:54:59 crc kubenswrapper[4808]: I0311 08:54:59.798110 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" path="/var/lib/kubelet/pods/d3a9e52e-5f5c-45cb-a809-a872ece0cd2f/volumes" Mar 11 08:55:01 crc kubenswrapper[4808]: I0311 08:55:01.547622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" event={"ID":"05ae43c7-5b21-4813-adfe-8906527c2a44","Type":"ContainerStarted","Data":"39b098c61ad568160390c5caa3f366cadd16552e3ecd9ca0437c327b5e031b14"} Mar 11 08:55:01 crc kubenswrapper[4808]: I0311 08:55:01.576052 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-trl6k" podStartSLOduration=1.179208485 podStartE2EDuration="6.576029411s" podCreationTimestamp="2026-03-11 08:54:55 +0000 UTC" firstStartedPulling="2026-03-11 08:54:55.81299798 +0000 UTC m=+946.766321300" 
lastFinishedPulling="2026-03-11 08:55:01.209818906 +0000 UTC m=+952.163142226" observedRunningTime="2026-03-11 08:55:01.568235166 +0000 UTC m=+952.521558506" watchObservedRunningTime="2026-03-11 08:55:01.576029411 +0000 UTC m=+952.529352751" Mar 11 08:55:05 crc kubenswrapper[4808]: I0311 08:55:05.469115 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rzf4d" Mar 11 08:55:05 crc kubenswrapper[4808]: I0311 08:55:05.754847 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:55:05 crc kubenswrapper[4808]: I0311 08:55:05.755255 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:55:05 crc kubenswrapper[4808]: I0311 08:55:05.763437 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:55:06 crc kubenswrapper[4808]: I0311 08:55:06.587453 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-678558f478-lm7fx" Mar 11 08:55:06 crc kubenswrapper[4808]: I0311 08:55:06.695704 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:55:16 crc kubenswrapper[4808]: I0311 08:55:16.022990 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-ds975" Mar 11 08:55:16 crc kubenswrapper[4808]: I0311 08:55:16.027397 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:55:16 crc kubenswrapper[4808]: I0311 08:55:16.027457 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.898978 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv"] Mar 11 08:55:28 crc kubenswrapper[4808]: E0311 08:55:28.899773 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="extract-utilities" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.899794 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="extract-utilities" Mar 11 08:55:28 crc kubenswrapper[4808]: E0311 08:55:28.899817 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="registry-server" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.899825 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="registry-server" Mar 11 08:55:28 crc kubenswrapper[4808]: E0311 08:55:28.899842 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="extract-content" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.899850 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="extract-content" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.899970 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a9e52e-5f5c-45cb-a809-a872ece0cd2f" containerName="registry-server" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.900808 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.903967 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.910160 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv"] Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.991126 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9rs\" (UniqueName: \"kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.991216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:28 crc kubenswrapper[4808]: I0311 08:55:28.991281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: 
I0311 08:55:29.092185 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.092315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm9rs\" (UniqueName: \"kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.092341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.093308 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.094076 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.124181 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9rs\" (UniqueName: \"kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.216289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.423607 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv"] Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.733722 4808 generic.go:334] "Generic (PLEG): container finished" podID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerID="9e93f80aa96826658f410a241b98f2e38eed8c92b571c591b0a051c654208cff" exitCode=0 Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.733841 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" event={"ID":"e795600f-0369-4c94-b30c-f79f2d2eef08","Type":"ContainerDied","Data":"9e93f80aa96826658f410a241b98f2e38eed8c92b571c591b0a051c654208cff"} Mar 11 08:55:29 crc kubenswrapper[4808]: I0311 08:55:29.734118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" event={"ID":"e795600f-0369-4c94-b30c-f79f2d2eef08","Type":"ContainerStarted","Data":"e9f992deddee480601d8509f994f152725ee331c489240e2e48eff28a3265dab"} Mar 11 08:55:31 crc kubenswrapper[4808]: I0311 08:55:31.747168 4808 generic.go:334] "Generic (PLEG): container finished" podID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerID="fba8c21e87602fe782b3fdf6e83a12659bea3cf8cc2da22c78878dc31ed8b2cb" exitCode=0 Mar 11 08:55:31 crc kubenswrapper[4808]: I0311 08:55:31.747225 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" event={"ID":"e795600f-0369-4c94-b30c-f79f2d2eef08","Type":"ContainerDied","Data":"fba8c21e87602fe782b3fdf6e83a12659bea3cf8cc2da22c78878dc31ed8b2cb"} Mar 11 08:55:31 crc kubenswrapper[4808]: I0311 08:55:31.748701 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zp4ks" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerName="console" containerID="cri-o://1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0" gracePeriod=15 Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.204419 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zp4ks_24ab5c03-d768-4147-bbc2-4e71ac337623/console/0.log" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.204737 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230600 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twpbf\" (UniqueName: \"kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230690 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230749 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230808 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230843 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.230974 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.231012 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert\") pod \"24ab5c03-d768-4147-bbc2-4e71ac337623\" (UID: \"24ab5c03-d768-4147-bbc2-4e71ac337623\") " Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.231654 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca" (OuterVolumeSpecName: "service-ca") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.231677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.232148 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.232377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config" (OuterVolumeSpecName: "console-config") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.237139 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf" (OuterVolumeSpecName: "kube-api-access-twpbf") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "kube-api-access-twpbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.237582 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.237699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "24ab5c03-d768-4147-bbc2-4e71ac337623" (UID: "24ab5c03-d768-4147-bbc2-4e71ac337623"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333194 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333221 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twpbf\" (UniqueName: \"kubernetes.io/projected/24ab5c03-d768-4147-bbc2-4e71ac337623-kube-api-access-twpbf\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333249 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333263 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333271 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333278 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24ab5c03-d768-4147-bbc2-4e71ac337623-console-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.333287 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24ab5c03-d768-4147-bbc2-4e71ac337623-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:32 crc 
kubenswrapper[4808]: I0311 08:55:32.754960 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zp4ks_24ab5c03-d768-4147-bbc2-4e71ac337623/console/0.log" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.755018 4808 generic.go:334] "Generic (PLEG): container finished" podID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerID="1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0" exitCode=2 Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.755099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp4ks" event={"ID":"24ab5c03-d768-4147-bbc2-4e71ac337623","Type":"ContainerDied","Data":"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0"} Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.755165 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zp4ks" event={"ID":"24ab5c03-d768-4147-bbc2-4e71ac337623","Type":"ContainerDied","Data":"d0c1fc932778507e5ff4041017fff259ed2eda76dc8719beab06708af1a0c33d"} Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.755188 4808 scope.go:117] "RemoveContainer" containerID="1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.755187 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zp4ks" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.758621 4808 generic.go:334] "Generic (PLEG): container finished" podID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerID="ae2acbda6d0d4b6a43bad1d95e313dc0f97703408684076ca37f85c256323972" exitCode=0 Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.759813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" event={"ID":"e795600f-0369-4c94-b30c-f79f2d2eef08","Type":"ContainerDied","Data":"ae2acbda6d0d4b6a43bad1d95e313dc0f97703408684076ca37f85c256323972"} Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.780994 4808 scope.go:117] "RemoveContainer" containerID="1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0" Mar 11 08:55:32 crc kubenswrapper[4808]: E0311 08:55:32.781661 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0\": container with ID starting with 1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0 not found: ID does not exist" containerID="1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.781760 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0"} err="failed to get container status \"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0\": rpc error: code = NotFound desc = could not find container \"1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0\": container with ID starting with 1cbab9ab4724fe279fbf33e2fa4c3350b20b119494cf9ac472c15763714a09e0 not found: ID does not exist" Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.804017 4808 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:55:32 crc kubenswrapper[4808]: I0311 08:55:32.809537 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zp4ks"] Mar 11 08:55:33 crc kubenswrapper[4808]: I0311 08:55:33.808595 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" path="/var/lib/kubelet/pods/24ab5c03-d768-4147-bbc2-4e71ac337623/volumes" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.068143 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.152915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm9rs\" (UniqueName: \"kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs\") pod \"e795600f-0369-4c94-b30c-f79f2d2eef08\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.152998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle\") pod \"e795600f-0369-4c94-b30c-f79f2d2eef08\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.153127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util\") pod \"e795600f-0369-4c94-b30c-f79f2d2eef08\" (UID: \"e795600f-0369-4c94-b30c-f79f2d2eef08\") " Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.154700 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle" (OuterVolumeSpecName: "bundle") pod "e795600f-0369-4c94-b30c-f79f2d2eef08" (UID: "e795600f-0369-4c94-b30c-f79f2d2eef08"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.157684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs" (OuterVolumeSpecName: "kube-api-access-fm9rs") pod "e795600f-0369-4c94-b30c-f79f2d2eef08" (UID: "e795600f-0369-4c94-b30c-f79f2d2eef08"). InnerVolumeSpecName "kube-api-access-fm9rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.172662 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util" (OuterVolumeSpecName: "util") pod "e795600f-0369-4c94-b30c-f79f2d2eef08" (UID: "e795600f-0369-4c94-b30c-f79f2d2eef08"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.255780 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-util\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.255849 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm9rs\" (UniqueName: \"kubernetes.io/projected/e795600f-0369-4c94-b30c-f79f2d2eef08-kube-api-access-fm9rs\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.255873 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e795600f-0369-4c94-b30c-f79f2d2eef08-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.781015 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" event={"ID":"e795600f-0369-4c94-b30c-f79f2d2eef08","Type":"ContainerDied","Data":"e9f992deddee480601d8509f994f152725ee331c489240e2e48eff28a3265dab"} Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.781062 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv" Mar 11 08:55:34 crc kubenswrapper[4808]: I0311 08:55:34.781071 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f992deddee480601d8509f994f152725ee331c489240e2e48eff28a3265dab" Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.027716 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.028278 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.028325 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.029073 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.029189 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" 
containerName="machine-config-daemon" containerID="cri-o://9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777" gracePeriod=600 Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.851800 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777" exitCode=0 Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.851859 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777"} Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.852166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a"} Mar 11 08:55:46 crc kubenswrapper[4808]: I0311 08:55:46.852187 4808 scope.go:117] "RemoveContainer" containerID="dfe1963de46b9f0d9bf3f89f3d4ece211127d1565ecdc2dcad109566897a8ce5" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.141940 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2"] Mar 11 08:55:47 crc kubenswrapper[4808]: E0311 08:55:47.142521 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerName="console" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142539 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerName="console" Mar 11 08:55:47 crc kubenswrapper[4808]: E0311 08:55:47.142557 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" 
containerName="util" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142564 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="util" Mar 11 08:55:47 crc kubenswrapper[4808]: E0311 08:55:47.142574 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="extract" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142581 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="extract" Mar 11 08:55:47 crc kubenswrapper[4808]: E0311 08:55:47.142595 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="pull" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142603 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="pull" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142720 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e795600f-0369-4c94-b30c-f79f2d2eef08" containerName="extract" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.142736 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ab5c03-d768-4147-bbc2-4e71ac337623" containerName="console" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.143188 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.144856 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.145494 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.146348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.146352 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lftzt" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.149821 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.166792 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2"] Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.232657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-webhook-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.233071 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52qh\" (UniqueName: \"kubernetes.io/projected/966ee7c4-0701-4503-adcb-b5f389defad1-kube-api-access-g52qh\") pod 
\"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.233229 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-apiservice-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.334016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52qh\" (UniqueName: \"kubernetes.io/projected/966ee7c4-0701-4503-adcb-b5f389defad1-kube-api-access-g52qh\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.334110 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-apiservice-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.334164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-webhook-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc 
kubenswrapper[4808]: I0311 08:55:47.340976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-apiservice-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.342980 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/966ee7c4-0701-4503-adcb-b5f389defad1-webhook-cert\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.352697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52qh\" (UniqueName: \"kubernetes.io/projected/966ee7c4-0701-4503-adcb-b5f389defad1-kube-api-access-g52qh\") pod \"metallb-operator-controller-manager-66bd996c44-tl4z2\" (UID: \"966ee7c4-0701-4503-adcb-b5f389defad1\") " pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.394423 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-666999559-qhsgf"] Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.395051 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.401161 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hvmbp" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.401390 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.404860 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.409977 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-666999559-qhsgf"] Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.460547 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.536150 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-webhook-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.536493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-apiservice-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.536530 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2js\" (UniqueName: \"kubernetes.io/projected/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-kube-api-access-gg2js\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.638421 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-apiservice-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.638469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2js\" (UniqueName: \"kubernetes.io/projected/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-kube-api-access-gg2js\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.638551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-webhook-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.644002 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-webhook-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" 
(UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.654642 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-apiservice-cert\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.668663 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2js\" (UniqueName: \"kubernetes.io/projected/d25c1c34-0aa8-42ca-ad90-0c50c934b31d-kube-api-access-gg2js\") pod \"metallb-operator-webhook-server-666999559-qhsgf\" (UID: \"d25c1c34-0aa8-42ca-ad90-0c50c934b31d\") " pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.692735 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2"] Mar 11 08:55:47 crc kubenswrapper[4808]: W0311 08:55:47.701442 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966ee7c4_0701_4503_adcb_b5f389defad1.slice/crio-15e514e19f9608c5be553885dc251a77a39572141ad61dfc6306f812a6b4b5b6 WatchSource:0}: Error finding container 15e514e19f9608c5be553885dc251a77a39572141ad61dfc6306f812a6b4b5b6: Status 404 returned error can't find the container with id 15e514e19f9608c5be553885dc251a77a39572141ad61dfc6306f812a6b4b5b6 Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.712041 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:47 crc kubenswrapper[4808]: I0311 08:55:47.873430 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" event={"ID":"966ee7c4-0701-4503-adcb-b5f389defad1","Type":"ContainerStarted","Data":"15e514e19f9608c5be553885dc251a77a39572141ad61dfc6306f812a6b4b5b6"} Mar 11 08:55:48 crc kubenswrapper[4808]: I0311 08:55:48.195186 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-666999559-qhsgf"] Mar 11 08:55:48 crc kubenswrapper[4808]: W0311 08:55:48.215465 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25c1c34_0aa8_42ca_ad90_0c50c934b31d.slice/crio-d0a8c80c6bc6a11214eb29c4eb78063684c460dc0a5abaea1dbcb841db2dec00 WatchSource:0}: Error finding container d0a8c80c6bc6a11214eb29c4eb78063684c460dc0a5abaea1dbcb841db2dec00: Status 404 returned error can't find the container with id d0a8c80c6bc6a11214eb29c4eb78063684c460dc0a5abaea1dbcb841db2dec00 Mar 11 08:55:48 crc kubenswrapper[4808]: I0311 08:55:48.896013 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" event={"ID":"d25c1c34-0aa8-42ca-ad90-0c50c934b31d","Type":"ContainerStarted","Data":"d0a8c80c6bc6a11214eb29c4eb78063684c460dc0a5abaea1dbcb841db2dec00"} Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.919218 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" event={"ID":"966ee7c4-0701-4503-adcb-b5f389defad1","Type":"ContainerStarted","Data":"9d1de0874905fa4ffbcd7bbbd2f5edc795f4f024c380a6ad066c32ab349b5b03"} Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.919628 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.920895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" event={"ID":"d25c1c34-0aa8-42ca-ad90-0c50c934b31d","Type":"ContainerStarted","Data":"252be65a8a5cd2a7fd1882e88e6c1dde8c2ecf75824f38b271c3f1aca6b8129d"} Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.921049 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.946300 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" podStartSLOduration=1.246246028 podStartE2EDuration="5.946276013s" podCreationTimestamp="2026-03-11 08:55:47 +0000 UTC" firstStartedPulling="2026-03-11 08:55:47.704277954 +0000 UTC m=+998.657601264" lastFinishedPulling="2026-03-11 08:55:52.404307929 +0000 UTC m=+1003.357631249" observedRunningTime="2026-03-11 08:55:52.941739153 +0000 UTC m=+1003.895062513" watchObservedRunningTime="2026-03-11 08:55:52.946276013 +0000 UTC m=+1003.899599363" Mar 11 08:55:52 crc kubenswrapper[4808]: I0311 08:55:52.960724 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" podStartSLOduration=1.758101216 podStartE2EDuration="5.960701487s" podCreationTimestamp="2026-03-11 08:55:47 +0000 UTC" firstStartedPulling="2026-03-11 08:55:48.218322305 +0000 UTC m=+999.171645625" lastFinishedPulling="2026-03-11 08:55:52.420922576 +0000 UTC m=+1003.374245896" observedRunningTime="2026-03-11 08:55:52.956837086 +0000 UTC m=+1003.910160416" watchObservedRunningTime="2026-03-11 08:55:52.960701487 +0000 UTC m=+1003.914024837" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.134509 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553656-v2j29"] Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.135792 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.138259 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.138457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.138526 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.146104 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553656-v2j29"] Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.315831 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6fn\" (UniqueName: \"kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn\") pod \"auto-csr-approver-29553656-v2j29\" (UID: \"911163d2-4780-49ae-bbc0-523d657323a8\") " pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.421305 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc6fn\" (UniqueName: \"kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn\") pod \"auto-csr-approver-29553656-v2j29\" (UID: \"911163d2-4780-49ae-bbc0-523d657323a8\") " pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.446409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qc6fn\" (UniqueName: \"kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn\") pod \"auto-csr-approver-29553656-v2j29\" (UID: \"911163d2-4780-49ae-bbc0-523d657323a8\") " pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.461834 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.701245 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553656-v2j29"] Mar 11 08:56:00 crc kubenswrapper[4808]: I0311 08:56:00.969016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553656-v2j29" event={"ID":"911163d2-4780-49ae-bbc0-523d657323a8","Type":"ContainerStarted","Data":"0f1ead28bc15fcc5488a556e92f6b207cf1022810bcc51df4d6ad8f063739c16"} Mar 11 08:56:01 crc kubenswrapper[4808]: I0311 08:56:01.975629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553656-v2j29" event={"ID":"911163d2-4780-49ae-bbc0-523d657323a8","Type":"ContainerStarted","Data":"0cf42abbdc063b77a117eb7b7888a5cd44a70d34d36139998e533c45e5c2b395"} Mar 11 08:56:01 crc kubenswrapper[4808]: I0311 08:56:01.991109 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553656-v2j29" podStartSLOduration=1.029056945 podStartE2EDuration="1.991086993s" podCreationTimestamp="2026-03-11 08:56:00 +0000 UTC" firstStartedPulling="2026-03-11 08:56:00.706758509 +0000 UTC m=+1011.660081829" lastFinishedPulling="2026-03-11 08:56:01.668788557 +0000 UTC m=+1012.622111877" observedRunningTime="2026-03-11 08:56:01.987886491 +0000 UTC m=+1012.941209821" watchObservedRunningTime="2026-03-11 08:56:01.991086993 +0000 UTC m=+1012.944410323" Mar 11 08:56:02 crc kubenswrapper[4808]: I0311 08:56:02.982536 4808 
generic.go:334] "Generic (PLEG): container finished" podID="911163d2-4780-49ae-bbc0-523d657323a8" containerID="0cf42abbdc063b77a117eb7b7888a5cd44a70d34d36139998e533c45e5c2b395" exitCode=0 Mar 11 08:56:02 crc kubenswrapper[4808]: I0311 08:56:02.982660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553656-v2j29" event={"ID":"911163d2-4780-49ae-bbc0-523d657323a8","Type":"ContainerDied","Data":"0cf42abbdc063b77a117eb7b7888a5cd44a70d34d36139998e533c45e5c2b395"} Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.232054 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.373561 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc6fn\" (UniqueName: \"kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn\") pod \"911163d2-4780-49ae-bbc0-523d657323a8\" (UID: \"911163d2-4780-49ae-bbc0-523d657323a8\") " Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.382606 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn" (OuterVolumeSpecName: "kube-api-access-qc6fn") pod "911163d2-4780-49ae-bbc0-523d657323a8" (UID: "911163d2-4780-49ae-bbc0-523d657323a8"). InnerVolumeSpecName "kube-api-access-qc6fn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.474933 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc6fn\" (UniqueName: \"kubernetes.io/projected/911163d2-4780-49ae-bbc0-523d657323a8-kube-api-access-qc6fn\") on node \"crc\" DevicePath \"\"" Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.997636 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553656-v2j29" event={"ID":"911163d2-4780-49ae-bbc0-523d657323a8","Type":"ContainerDied","Data":"0f1ead28bc15fcc5488a556e92f6b207cf1022810bcc51df4d6ad8f063739c16"} Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.997676 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f1ead28bc15fcc5488a556e92f6b207cf1022810bcc51df4d6ad8f063739c16" Mar 11 08:56:04 crc kubenswrapper[4808]: I0311 08:56:04.997677 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553656-v2j29" Mar 11 08:56:05 crc kubenswrapper[4808]: I0311 08:56:05.043207 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553650-l5bbk"] Mar 11 08:56:05 crc kubenswrapper[4808]: I0311 08:56:05.046805 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553650-l5bbk"] Mar 11 08:56:05 crc kubenswrapper[4808]: I0311 08:56:05.797662 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c" path="/var/lib/kubelet/pods/5bd9ff2a-2fef-4d54-b6ec-5c6200d34c4c/volumes" Mar 11 08:56:07 crc kubenswrapper[4808]: I0311 08:56:07.715811 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-666999559-qhsgf" Mar 11 08:56:27 crc kubenswrapper[4808]: I0311 08:56:27.462459 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-66bd996c44-tl4z2" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.076032 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sfkdf"] Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.076397 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911163d2-4780-49ae-bbc0-523d657323a8" containerName="oc" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.076425 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="911163d2-4780-49ae-bbc0-523d657323a8" containerName="oc" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.076624 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="911163d2-4780-49ae-bbc0-523d657323a8" containerName="oc" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.079856 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.082854 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8c8sg" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.083090 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.085863 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.091038 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.091856 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.094038 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.102282 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.153562 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xfmfr"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.154644 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.157457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.157656 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ttcb6" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.157808 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.158058 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.173246 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-bj8qf"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.176319 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.178920 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.195768 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-bj8qf"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-metrics\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251801 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-sockets\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251826 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rncg5\" (UniqueName: \"kubernetes.io/projected/50d06bd1-c292-47f1-bea4-19930631d122-kube-api-access-rncg5\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc 
kubenswrapper[4808]: I0311 08:56:28.251876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpts\" (UniqueName: \"kubernetes.io/projected/b4effb47-31c0-4aff-89e2-0bee10906717-kube-api-access-mvpts\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-reloader\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251906 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/50d06bd1-c292-47f1-bea4-19930631d122-frr-startup\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50d06bd1-c292-47f1-bea4-19930631d122-metrics-certs\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.251947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-conf\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353732 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-metrics\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353799 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-sockets\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353830 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8rn\" (UniqueName: \"kubernetes.io/projected/3bcbbc12-ad14-4533-8e3a-740616030126-kube-api-access-nx8rn\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353862 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-metrics-certs\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rncg5\" (UniqueName: \"kubernetes.io/projected/50d06bd1-c292-47f1-bea4-19930631d122-kube-api-access-rncg5\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-reloader\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353977 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvpts\" (UniqueName: \"kubernetes.io/projected/b4effb47-31c0-4aff-89e2-0bee10906717-kube-api-access-mvpts\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.353998 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/50d06bd1-c292-47f1-bea4-19930631d122-frr-startup\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354025 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50d06bd1-c292-47f1-bea4-19930631d122-metrics-certs\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-metrics-certs\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354071 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-cert\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-conf\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354134 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hlj\" (UniqueName: \"kubernetes.io/projected/43da1e53-74d6-4e82-8937-beb757888f5f-kube-api-access-94hlj\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354194 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43da1e53-74d6-4e82-8937-beb757888f5f-metallb-excludel2\") pod \"speaker-xfmfr\" 
(UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.354327 4808 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354386 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-sockets\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.354399 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert podName:b4effb47-31c0-4aff-89e2-0bee10906717 nodeName:}" failed. No retries permitted until 2026-03-11 08:56:28.854377903 +0000 UTC m=+1039.807701233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert") pod "frr-k8s-webhook-server-bcc4b6f68-nn6tk" (UID: "b4effb47-31c0-4aff-89e2-0bee10906717") : secret "frr-k8s-webhook-server-cert" not found Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354322 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-metrics\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.354679 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-frr-conf\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 
08:56:28.354878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/50d06bd1-c292-47f1-bea4-19930631d122-reloader\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.355841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/50d06bd1-c292-47f1-bea4-19930631d122-frr-startup\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.368080 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50d06bd1-c292-47f1-bea4-19930631d122-metrics-certs\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.378961 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rncg5\" (UniqueName: \"kubernetes.io/projected/50d06bd1-c292-47f1-bea4-19930631d122-kube-api-access-rncg5\") pod \"frr-k8s-sfkdf\" (UID: \"50d06bd1-c292-47f1-bea4-19930631d122\") " pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.382187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvpts\" (UniqueName: \"kubernetes.io/projected/b4effb47-31c0-4aff-89e2-0bee10906717-kube-api-access-mvpts\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.432252 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455258 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hlj\" (UniqueName: \"kubernetes.io/projected/43da1e53-74d6-4e82-8937-beb757888f5f-kube-api-access-94hlj\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455326 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455390 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43da1e53-74d6-4e82-8937-beb757888f5f-metallb-excludel2\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8rn\" (UniqueName: \"kubernetes.io/projected/3bcbbc12-ad14-4533-8e3a-740616030126-kube-api-access-nx8rn\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.455470 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455507 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-metrics-certs\") pod 
\"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.455534 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist podName:43da1e53-74d6-4e82-8937-beb757888f5f nodeName:}" failed. No retries permitted until 2026-03-11 08:56:28.955515797 +0000 UTC m=+1039.908839117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist") pod "speaker-xfmfr" (UID: "43da1e53-74d6-4e82-8937-beb757888f5f") : secret "metallb-memberlist" not found Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-metrics-certs\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.455581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-cert\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.456000 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/43da1e53-74d6-4e82-8937-beb757888f5f-metallb-excludel2\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.459030 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-metrics-certs\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.460229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.469557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-metrics-certs\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.469632 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bcbbc12-ad14-4533-8e3a-740616030126-cert\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.478530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8rn\" (UniqueName: \"kubernetes.io/projected/3bcbbc12-ad14-4533-8e3a-740616030126-kube-api-access-nx8rn\") pod \"controller-7bb4cc7c98-bj8qf\" (UID: \"3bcbbc12-ad14-4533-8e3a-740616030126\") " pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.482547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hlj\" (UniqueName: \"kubernetes.io/projected/43da1e53-74d6-4e82-8937-beb757888f5f-kube-api-access-94hlj\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.499132 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.860829 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.874347 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4effb47-31c0-4aff-89e2-0bee10906717-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nn6tk\" (UID: \"b4effb47-31c0-4aff-89e2-0bee10906717\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.908048 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-bj8qf"] Mar 11 08:56:28 crc kubenswrapper[4808]: I0311 08:56:28.969351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.970010 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 11 08:56:28 crc kubenswrapper[4808]: E0311 08:56:28.970082 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist podName:43da1e53-74d6-4e82-8937-beb757888f5f nodeName:}" failed. No retries permitted until 2026-03-11 08:56:29.970064554 +0000 UTC m=+1040.923387874 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist") pod "speaker-xfmfr" (UID: "43da1e53-74d6-4e82-8937-beb757888f5f") : secret "metallb-memberlist" not found Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.043437 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.144315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"7dde4d6c46de79deeccc2fc956e2956e0d76d220abfc98baf9b47b61462093c0"} Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.145769 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-bj8qf" event={"ID":"3bcbbc12-ad14-4533-8e3a-740616030126","Type":"ContainerStarted","Data":"b366871fcdb3b6ab052bab8f572b309535b27cdf7ddb3d2c993516a8d387dd59"} Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.145811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-bj8qf" event={"ID":"3bcbbc12-ad14-4533-8e3a-740616030126","Type":"ContainerStarted","Data":"1e16968a7d66fc09330eda23148cc4919287e727f2467df42d72a91e505bfde0"} Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.318686 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk"] Mar 11 08:56:29 crc kubenswrapper[4808]: W0311 08:56:29.328515 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4effb47_31c0_4aff_89e2_0bee10906717.slice/crio-54bba039ee7f3ef985f5a322ca074144c9f62d52e8ac6018fb3aba474647f58e WatchSource:0}: Error finding container 54bba039ee7f3ef985f5a322ca074144c9f62d52e8ac6018fb3aba474647f58e: Status 404 
returned error can't find the container with id 54bba039ee7f3ef985f5a322ca074144c9f62d52e8ac6018fb3aba474647f58e Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.983818 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:29 crc kubenswrapper[4808]: I0311 08:56:29.990286 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/43da1e53-74d6-4e82-8937-beb757888f5f-memberlist\") pod \"speaker-xfmfr\" (UID: \"43da1e53-74d6-4e82-8937-beb757888f5f\") " pod="metallb-system/speaker-xfmfr" Mar 11 08:56:30 crc kubenswrapper[4808]: I0311 08:56:30.162338 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-bj8qf" event={"ID":"3bcbbc12-ad14-4533-8e3a-740616030126","Type":"ContainerStarted","Data":"3896709c1b08e84abd53dbb84aa9b0f273b4b222b8eb5b029ad452b3d910445f"} Mar 11 08:56:30 crc kubenswrapper[4808]: I0311 08:56:30.162483 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:30 crc kubenswrapper[4808]: I0311 08:56:30.163592 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" event={"ID":"b4effb47-31c0-4aff-89e2-0bee10906717","Type":"ContainerStarted","Data":"54bba039ee7f3ef985f5a322ca074144c9f62d52e8ac6018fb3aba474647f58e"} Mar 11 08:56:30 crc kubenswrapper[4808]: I0311 08:56:30.276987 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xfmfr" Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.178474 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xfmfr" event={"ID":"43da1e53-74d6-4e82-8937-beb757888f5f","Type":"ContainerStarted","Data":"3a7b44880f9339ac49ca2c03a0a9058d16d80e1076e6b622258b243f56aa97ff"} Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.178851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xfmfr" event={"ID":"43da1e53-74d6-4e82-8937-beb757888f5f","Type":"ContainerStarted","Data":"7cd6928ce99d94c75539d1a9e5577caef67eeb740b75c75f55a89cf152177421"} Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.178866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xfmfr" event={"ID":"43da1e53-74d6-4e82-8937-beb757888f5f","Type":"ContainerStarted","Data":"88522d7b108fdbe0f436f0e74ce02dea4127c3825fdcf60337da49e643000d9b"} Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.179033 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xfmfr" Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.219269 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-bj8qf" podStartSLOduration=3.219246706 podStartE2EDuration="3.219246706s" podCreationTimestamp="2026-03-11 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:56:30.185271193 +0000 UTC m=+1041.138594513" watchObservedRunningTime="2026-03-11 08:56:31.219246706 +0000 UTC m=+1042.172570026" Mar 11 08:56:31 crc kubenswrapper[4808]: I0311 08:56:31.220665 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xfmfr" podStartSLOduration=3.220651166 podStartE2EDuration="3.220651166s" podCreationTimestamp="2026-03-11 08:56:28 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:56:31.215463027 +0000 UTC m=+1042.168786337" watchObservedRunningTime="2026-03-11 08:56:31.220651166 +0000 UTC m=+1042.173974486" Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.126640 4808 scope.go:117] "RemoveContainer" containerID="aed826059c55a11766c57ccb9bc0e7e0c65ffc98781eaeff7bba0544c82ba5ca" Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.216908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" event={"ID":"b4effb47-31c0-4aff-89e2-0bee10906717","Type":"ContainerStarted","Data":"159a85b0de1104b01ec0ba8bb8a3126fc780701808fee2cacda62fe10f94c1e6"} Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.217016 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.220113 4808 generic.go:334] "Generic (PLEG): container finished" podID="50d06bd1-c292-47f1-bea4-19930631d122" containerID="418eace21a27ea8c0a27b4e09b202af4e4c3b17fd4a5cf35219e38b198e4ff1c" exitCode=0 Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.220156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerDied","Data":"418eace21a27ea8c0a27b4e09b202af4e4c3b17fd4a5cf35219e38b198e4ff1c"} Mar 11 08:56:36 crc kubenswrapper[4808]: I0311 08:56:36.244036 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" podStartSLOduration=1.9861188840000001 podStartE2EDuration="8.244016158s" podCreationTimestamp="2026-03-11 08:56:28 +0000 UTC" firstStartedPulling="2026-03-11 08:56:29.330068103 +0000 UTC m=+1040.283391423" lastFinishedPulling="2026-03-11 08:56:35.587965377 +0000 
UTC m=+1046.541288697" observedRunningTime="2026-03-11 08:56:36.236056379 +0000 UTC m=+1047.189379709" watchObservedRunningTime="2026-03-11 08:56:36.244016158 +0000 UTC m=+1047.197339488" Mar 11 08:56:37 crc kubenswrapper[4808]: I0311 08:56:37.227539 4808 generic.go:334] "Generic (PLEG): container finished" podID="50d06bd1-c292-47f1-bea4-19930631d122" containerID="3a2782afdab238389025b2a750d6398436502b82af7d4e0bb9216e4a8c7ca005" exitCode=0 Mar 11 08:56:37 crc kubenswrapper[4808]: I0311 08:56:37.227607 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerDied","Data":"3a2782afdab238389025b2a750d6398436502b82af7d4e0bb9216e4a8c7ca005"} Mar 11 08:56:38 crc kubenswrapper[4808]: I0311 08:56:38.236874 4808 generic.go:334] "Generic (PLEG): container finished" podID="50d06bd1-c292-47f1-bea4-19930631d122" containerID="957c4b20b11d68f348238f970af65f2092473a8221a6fdfc3cdc563c9b672cb5" exitCode=0 Mar 11 08:56:38 crc kubenswrapper[4808]: I0311 08:56:38.236942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerDied","Data":"957c4b20b11d68f348238f970af65f2092473a8221a6fdfc3cdc563c9b672cb5"} Mar 11 08:56:39 crc kubenswrapper[4808]: I0311 08:56:39.247122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"7afb45cb598fb0c9991f5eea6536d821b204be5c13f44c76ab958e9704ba5c95"} Mar 11 08:56:39 crc kubenswrapper[4808]: I0311 08:56:39.247160 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"6dcded58f1e10d4cb58a2c407bc48dea3c3a9f760ba529818ebb60324d66c343"} Mar 11 08:56:39 crc kubenswrapper[4808]: I0311 08:56:39.247169 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"c0c433f1993546eb83e2bc5050ec28633ec79201bd492ef2a5bf59e291bcceca"} Mar 11 08:56:39 crc kubenswrapper[4808]: I0311 08:56:39.247178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"aac584a386ce481d6836934c4b64c7c2f3432bf81e0d1d125aac7e6a7c238009"} Mar 11 08:56:39 crc kubenswrapper[4808]: I0311 08:56:39.247186 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"229d36c97263c313aecc65b6d1375dfd135b52bfdb17a9489f03d2615c868095"} Mar 11 08:56:40 crc kubenswrapper[4808]: I0311 08:56:40.262152 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sfkdf" event={"ID":"50d06bd1-c292-47f1-bea4-19930631d122","Type":"ContainerStarted","Data":"e2ba6e4a05fef1f72b1e8ee978e948b5533c029e5d0ee5fa1751fec6d733bc46"} Mar 11 08:56:40 crc kubenswrapper[4808]: I0311 08:56:40.262587 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:40 crc kubenswrapper[4808]: I0311 08:56:40.282416 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xfmfr" Mar 11 08:56:40 crc kubenswrapper[4808]: I0311 08:56:40.292303 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sfkdf" podStartSLOduration=5.33163005 podStartE2EDuration="12.292282916s" podCreationTimestamp="2026-03-11 08:56:28 +0000 UTC" firstStartedPulling="2026-03-11 08:56:28.604953099 +0000 UTC m=+1039.558276419" lastFinishedPulling="2026-03-11 08:56:35.565605965 +0000 UTC m=+1046.518929285" observedRunningTime="2026-03-11 08:56:40.286396147 +0000 UTC 
m=+1051.239719487" watchObservedRunningTime="2026-03-11 08:56:40.292282916 +0000 UTC m=+1051.245606246" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.544602 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh"] Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.545831 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.548797 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.559095 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh"] Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.638775 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.638821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.638917 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6g2\" (UniqueName: \"kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.740008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6g2\" (UniqueName: \"kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.740104 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.740142 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.740707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.740727 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.761127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6g2\" (UniqueName: \"kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:41 crc kubenswrapper[4808]: I0311 08:56:41.862348 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:42 crc kubenswrapper[4808]: I0311 08:56:42.326806 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh"] Mar 11 08:56:42 crc kubenswrapper[4808]: W0311 08:56:42.330937 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32820c19_f3d4_48c6_bb0a_a9d402f4db28.slice/crio-c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55 WatchSource:0}: Error finding container c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55: Status 404 returned error can't find the container with id c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55 Mar 11 08:56:43 crc kubenswrapper[4808]: I0311 08:56:43.287191 4808 generic.go:334] "Generic (PLEG): container finished" podID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerID="b8cd3ab8bdd9b127ee3ef515f89868113766462f6d3ad1ea2113f14bc9cfd317" exitCode=0 Mar 11 08:56:43 crc kubenswrapper[4808]: I0311 08:56:43.287238 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" event={"ID":"32820c19-f3d4-48c6-bb0a-a9d402f4db28","Type":"ContainerDied","Data":"b8cd3ab8bdd9b127ee3ef515f89868113766462f6d3ad1ea2113f14bc9cfd317"} Mar 11 08:56:43 crc kubenswrapper[4808]: I0311 08:56:43.287268 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" event={"ID":"32820c19-f3d4-48c6-bb0a-a9d402f4db28","Type":"ContainerStarted","Data":"c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55"} Mar 11 08:56:43 crc kubenswrapper[4808]: I0311 08:56:43.433106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:43 crc kubenswrapper[4808]: I0311 08:56:43.469210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:48 crc kubenswrapper[4808]: I0311 08:56:48.437662 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sfkdf" Mar 11 08:56:48 crc kubenswrapper[4808]: I0311 08:56:48.505919 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-bj8qf" Mar 11 08:56:49 crc kubenswrapper[4808]: I0311 08:56:49.050114 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nn6tk" Mar 11 08:56:49 crc kubenswrapper[4808]: I0311 08:56:49.323636 4808 generic.go:334] "Generic (PLEG): container finished" podID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerID="95df8dc33ed9f28ff15205aa494632570b157d09478e539eec992e9bfef0f442" exitCode=0 Mar 11 08:56:49 crc kubenswrapper[4808]: I0311 08:56:49.323741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" event={"ID":"32820c19-f3d4-48c6-bb0a-a9d402f4db28","Type":"ContainerDied","Data":"95df8dc33ed9f28ff15205aa494632570b157d09478e539eec992e9bfef0f442"} Mar 11 08:56:50 crc kubenswrapper[4808]: I0311 08:56:50.345207 4808 generic.go:334] "Generic (PLEG): container finished" podID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerID="2d52d4711a4fd26d5d6060d033b2179f99fa9ba5b7b801f522ec765453b44be7" exitCode=0 Mar 11 08:56:50 crc kubenswrapper[4808]: I0311 08:56:50.345276 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" event={"ID":"32820c19-f3d4-48c6-bb0a-a9d402f4db28","Type":"ContainerDied","Data":"2d52d4711a4fd26d5d6060d033b2179f99fa9ba5b7b801f522ec765453b44be7"} Mar 11 
08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.637480 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.784112 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util\") pod \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.784244 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6g2\" (UniqueName: \"kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2\") pod \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.784286 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle\") pod \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\" (UID: \"32820c19-f3d4-48c6-bb0a-a9d402f4db28\") " Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.785282 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle" (OuterVolumeSpecName: "bundle") pod "32820c19-f3d4-48c6-bb0a-a9d402f4db28" (UID: "32820c19-f3d4-48c6-bb0a-a9d402f4db28"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.785505 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.793825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2" (OuterVolumeSpecName: "kube-api-access-2m6g2") pod "32820c19-f3d4-48c6-bb0a-a9d402f4db28" (UID: "32820c19-f3d4-48c6-bb0a-a9d402f4db28"). InnerVolumeSpecName "kube-api-access-2m6g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.798337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util" (OuterVolumeSpecName: "util") pod "32820c19-f3d4-48c6-bb0a-a9d402f4db28" (UID: "32820c19-f3d4-48c6-bb0a-a9d402f4db28"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.886948 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6g2\" (UniqueName: \"kubernetes.io/projected/32820c19-f3d4-48c6-bb0a-a9d402f4db28-kube-api-access-2m6g2\") on node \"crc\" DevicePath \"\""
Mar 11 08:56:51 crc kubenswrapper[4808]: I0311 08:56:51.886984 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32820c19-f3d4-48c6-bb0a-a9d402f4db28-util\") on node \"crc\" DevicePath \"\""
Mar 11 08:56:52 crc kubenswrapper[4808]: I0311 08:56:52.358835 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh" event={"ID":"32820c19-f3d4-48c6-bb0a-a9d402f4db28","Type":"ContainerDied","Data":"c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55"}
Mar 11 08:56:52 crc kubenswrapper[4808]: I0311 08:56:52.359154 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6a68e6d3c88d3546538485f9f37f05b5aca15fdd0fd499907094316a00b5d55"
Mar 11 08:56:52 crc kubenswrapper[4808]: I0311 08:56:52.359216 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.626099 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"]
Mar 11 08:56:59 crc kubenswrapper[4808]: E0311 08:56:59.626640 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="pull"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.626652 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="pull"
Mar 11 08:56:59 crc kubenswrapper[4808]: E0311 08:56:59.626665 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="util"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.626671 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="util"
Mar 11 08:56:59 crc kubenswrapper[4808]: E0311 08:56:59.626682 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="extract"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.626689 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="extract"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.626786 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="32820c19-f3d4-48c6-bb0a-a9d402f4db28" containerName="extract"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.627184 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.629202 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-2h7r2"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.629857 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.630261 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.638308 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"]
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.790537 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rpm\" (UniqueName: \"kubernetes.io/projected/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-kube-api-access-g4rpm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.790869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.892225 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rpm\" (UniqueName: \"kubernetes.io/projected/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-kube-api-access-g4rpm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.892375 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.893221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:56:59 crc kubenswrapper[4808]: I0311 08:56:59.918223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rpm\" (UniqueName: \"kubernetes.io/projected/93f74ce8-2a21-41a5-bd6e-4e8c11cc0191-kube-api-access-g4rpm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-b7f8z\" (UID: \"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:57:00 crc kubenswrapper[4808]: I0311 08:57:00.008002 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"
Mar 11 08:57:00 crc kubenswrapper[4808]: I0311 08:57:00.242897 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z"]
Mar 11 08:57:00 crc kubenswrapper[4808]: I0311 08:57:00.408023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z" event={"ID":"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191","Type":"ContainerStarted","Data":"203c9f0a74b24e21eaeae18a8e2e7e422db5eb7c9bdad87215ce4c9a7903aa2a"}
Mar 11 08:57:03 crc kubenswrapper[4808]: I0311 08:57:03.431055 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z" event={"ID":"93f74ce8-2a21-41a5-bd6e-4e8c11cc0191","Type":"ContainerStarted","Data":"8eabd351502ff1097f9663f1bdd71d03ac2600fd7c846c1d828acac7ad5b2480"}
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.847430 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-b7f8z" podStartSLOduration=5.414337311 podStartE2EDuration="7.847410885s" podCreationTimestamp="2026-03-11 08:56:59 +0000 UTC" firstStartedPulling="2026-03-11 08:57:00.253005106 +0000 UTC m=+1071.206328426" lastFinishedPulling="2026-03-11 08:57:02.68607867 +0000 UTC m=+1073.639402000" observedRunningTime="2026-03-11 08:57:03.457522083 +0000 UTC m=+1074.410845413" watchObservedRunningTime="2026-03-11 08:57:06.847410885 +0000 UTC m=+1077.800734215"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.849240 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l9v5h"]
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.850110 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.852798 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6psvc"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.852821 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.852829 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.857288 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l9v5h"]
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.976989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:06 crc kubenswrapper[4808]: I0311 08:57:06.977111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl5k\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-kube-api-access-8jl5k\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.078393 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl5k\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-kube-api-access-8jl5k\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.078504 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.099253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl5k\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-kube-api-access-8jl5k\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.111805 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-l9v5h\" (UID: \"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de\") " pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.168623 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:07 crc kubenswrapper[4808]: I0311 08:57:07.654567 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-l9v5h"]
Mar 11 08:57:08 crc kubenswrapper[4808]: I0311 08:57:08.462951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h" event={"ID":"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de","Type":"ContainerStarted","Data":"c20408fc69e938a935a826ccc5af5b87f814f50963f0607290be3ea5e310bceb"}
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.626687 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.628079 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.637994 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.715296 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82xl\" (UniqueName: \"kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.715383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.715469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.816646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82xl\" (UniqueName: \"kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.817069 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.817128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.817719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.817987 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.844963 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82xl\" (UniqueName: \"kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl\") pod \"community-operators-9t586\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") " pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:09 crc kubenswrapper[4808]: I0311 08:57:09.968013 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.330244 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hpwdk"]
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.331411 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.338155 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-c25jc"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.342838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hpwdk"]
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.368195 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.424123 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.424210 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrgm\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-kube-api-access-cqrgm\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.477545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerStarted","Data":"c2ea30a3db34fb14cfd74435979b2bee2320581d1e6a36324605ae6f4e924f15"}
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.525791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrgm\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-kube-api-access-cqrgm\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.525853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.543666 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.544060 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrgm\" (UniqueName: \"kubernetes.io/projected/6ea1f6d6-8e38-4c9b-bc81-d72f4f182323-kube-api-access-cqrgm\") pod \"cert-manager-cainjector-5545bd876-hpwdk\" (UID: \"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:10 crc kubenswrapper[4808]: I0311 08:57:10.658567 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk"
Mar 11 08:57:11 crc kubenswrapper[4808]: I0311 08:57:11.065787 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hpwdk"]
Mar 11 08:57:11 crc kubenswrapper[4808]: I0311 08:57:11.484884 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a552133-9434-4c80-af43-6b54f0388bc3" containerID="8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5" exitCode=0
Mar 11 08:57:11 crc kubenswrapper[4808]: I0311 08:57:11.484966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerDied","Data":"8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5"}
Mar 11 08:57:11 crc kubenswrapper[4808]: I0311 08:57:11.485984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk" event={"ID":"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323","Type":"ContainerStarted","Data":"2e53a9079bb2ee004c267739826dedbc8a4ed1aa33c50489709ae31588879452"}
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.497278 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk" event={"ID":"6ea1f6d6-8e38-4c9b-bc81-d72f4f182323","Type":"ContainerStarted","Data":"528a955441c24dcbc4189c96043a897f5fbfb4912b99c602a4f9e0758c04687b"}
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.499273 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h" event={"ID":"f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de","Type":"ContainerStarted","Data":"ac300840b9f8f8a8324fc8ceb17ff5ec364dcc19ccef441d5795b0458dad62c7"}
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.499395 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.500729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerStarted","Data":"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01"}
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.516177 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-hpwdk" podStartSLOduration=1.8039436260000001 podStartE2EDuration="3.516158368s" podCreationTimestamp="2026-03-11 08:57:10 +0000 UTC" firstStartedPulling="2026-03-11 08:57:11.072898742 +0000 UTC m=+1082.026222082" lastFinishedPulling="2026-03-11 08:57:12.785113494 +0000 UTC m=+1083.738436824" observedRunningTime="2026-03-11 08:57:13.513945475 +0000 UTC m=+1084.467268795" watchObservedRunningTime="2026-03-11 08:57:13.516158368 +0000 UTC m=+1084.469481708"
Mar 11 08:57:13 crc kubenswrapper[4808]: I0311 08:57:13.558113 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h" podStartSLOduration=2.426516174 podStartE2EDuration="7.558092453s" podCreationTimestamp="2026-03-11 08:57:06 +0000 UTC" firstStartedPulling="2026-03-11 08:57:07.669085191 +0000 UTC m=+1078.622408511" lastFinishedPulling="2026-03-11 08:57:12.80066147 +0000 UTC m=+1083.753984790" observedRunningTime="2026-03-11 08:57:13.55625117 +0000 UTC m=+1084.509574500" watchObservedRunningTime="2026-03-11 08:57:13.558092453 +0000 UTC m=+1084.511415773"
Mar 11 08:57:14 crc kubenswrapper[4808]: I0311 08:57:14.509259 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a552133-9434-4c80-af43-6b54f0388bc3" containerID="184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01" exitCode=0
Mar 11 08:57:14 crc kubenswrapper[4808]: I0311 08:57:14.509320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerDied","Data":"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01"}
Mar 11 08:57:16 crc kubenswrapper[4808]: I0311 08:57:16.522189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerStarted","Data":"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"}
Mar 11 08:57:16 crc kubenswrapper[4808]: I0311 08:57:16.541227 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t586" podStartSLOduration=4.840289537 podStartE2EDuration="7.541212352s" podCreationTimestamp="2026-03-11 08:57:09 +0000 UTC" firstStartedPulling="2026-03-11 08:57:12.716536924 +0000 UTC m=+1083.669860244" lastFinishedPulling="2026-03-11 08:57:15.417459739 +0000 UTC m=+1086.370783059" observedRunningTime="2026-03-11 08:57:16.539390439 +0000 UTC m=+1087.492713759" watchObservedRunningTime="2026-03-11 08:57:16.541212352 +0000 UTC m=+1087.494535672"
Mar 11 08:57:17 crc kubenswrapper[4808]: I0311 08:57:17.172245 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-l9v5h"
Mar 11 08:57:19 crc kubenswrapper[4808]: I0311 08:57:19.968436 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:19 crc kubenswrapper[4808]: I0311 08:57:19.968486 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.008513 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.264683 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-z7dgl"]
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.266402 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.269555 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tphkl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.275264 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-z7dgl"]
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.383885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-bound-sa-token\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.384023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2qs5\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-kube-api-access-d2qs5\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.485677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2qs5\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-kube-api-access-d2qs5\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.485755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-bound-sa-token\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.511801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-bound-sa-token\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.514382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2qs5\" (UniqueName: \"kubernetes.io/projected/3d663586-8683-4f72-adb2-86264ae4d951-kube-api-access-d2qs5\") pod \"cert-manager-545d4d4674-z7dgl\" (UID: \"3d663586-8683-4f72-adb2-86264ae4d951\") " pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.586401 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-z7dgl"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.606147 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.646061 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:20 crc kubenswrapper[4808]: I0311 08:57:20.981402 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-z7dgl"]
Mar 11 08:57:21 crc kubenswrapper[4808]: I0311 08:57:21.573135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-z7dgl" event={"ID":"3d663586-8683-4f72-adb2-86264ae4d951","Type":"ContainerStarted","Data":"00d12942f9ca441e9228fa0de5291741dc6d6346a06ec98d0cb8a6be47756796"}
Mar 11 08:57:21 crc kubenswrapper[4808]: I0311 08:57:21.573488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-z7dgl" event={"ID":"3d663586-8683-4f72-adb2-86264ae4d951","Type":"ContainerStarted","Data":"7ff0eababf5e974f66cd85f68a8555777cd04359b67cfe5d2cac0fc60eeead8a"}
Mar 11 08:57:21 crc kubenswrapper[4808]: I0311 08:57:21.589393 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-z7dgl" podStartSLOduration=1.589345664 podStartE2EDuration="1.589345664s" podCreationTimestamp="2026-03-11 08:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:57:21.588543951 +0000 UTC m=+1092.541867271" watchObservedRunningTime="2026-03-11 08:57:21.589345664 +0000 UTC m=+1092.542669004"
Mar 11 08:57:22 crc kubenswrapper[4808]: I0311 08:57:22.577870 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t586" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="registry-server" containerID="cri-o://114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8" gracePeriod=2
Mar 11 08:57:22 crc kubenswrapper[4808]: E0311 08:57:22.713010 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a552133_9434_4c80_af43_6b54f0388bc3.slice/crio-conmon-114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a552133_9434_4c80_af43_6b54f0388bc3.slice/crio-114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8.scope\": RecentStats: unable to find data in memory cache]"
Mar 11 08:57:22 crc kubenswrapper[4808]: I0311 08:57:22.984890 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.161976 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities\") pod \"1a552133-9434-4c80-af43-6b54f0388bc3\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") "
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.162090 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t82xl\" (UniqueName: \"kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl\") pod \"1a552133-9434-4c80-af43-6b54f0388bc3\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") "
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.162151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content\") pod \"1a552133-9434-4c80-af43-6b54f0388bc3\" (UID: \"1a552133-9434-4c80-af43-6b54f0388bc3\") "
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.162904 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities" (OuterVolumeSpecName: "utilities") pod "1a552133-9434-4c80-af43-6b54f0388bc3" (UID: "1a552133-9434-4c80-af43-6b54f0388bc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.169414 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl" (OuterVolumeSpecName: "kube-api-access-t82xl") pod "1a552133-9434-4c80-af43-6b54f0388bc3" (UID: "1a552133-9434-4c80-af43-6b54f0388bc3"). InnerVolumeSpecName "kube-api-access-t82xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.222715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a552133-9434-4c80-af43-6b54f0388bc3" (UID: "1a552133-9434-4c80-af43-6b54f0388bc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.264048 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.264098 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t82xl\" (UniqueName: \"kubernetes.io/projected/1a552133-9434-4c80-af43-6b54f0388bc3-kube-api-access-t82xl\") on node \"crc\" DevicePath \"\""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.264115 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a552133-9434-4c80-af43-6b54f0388bc3-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.585288 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a552133-9434-4c80-af43-6b54f0388bc3" containerID="114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8" exitCode=0
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.585347 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerDied","Data":"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"}
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.585401 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t586"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.585432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t586" event={"ID":"1a552133-9434-4c80-af43-6b54f0388bc3","Type":"ContainerDied","Data":"c2ea30a3db34fb14cfd74435979b2bee2320581d1e6a36324605ae6f4e924f15"}
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.585455 4808 scope.go:117] "RemoveContainer" containerID="114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.602247 4808 scope.go:117] "RemoveContainer" containerID="184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.619543 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.621271 4808 scope.go:117] "RemoveContainer" containerID="8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.624461 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9t586"]
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.662125 4808 scope.go:117] "RemoveContainer" containerID="114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"
Mar 11 08:57:23 crc kubenswrapper[4808]: E0311 08:57:23.662530 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8\": container with ID starting with 114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8 not found: ID does not exist" containerID="114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"
Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.662566
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8"} err="failed to get container status \"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8\": rpc error: code = NotFound desc = could not find container \"114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8\": container with ID starting with 114e618b7187bbc05f860206aedbf6afbbcf57da98e2b1c09643a2849cb7bbf8 not found: ID does not exist" Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.662589 4808 scope.go:117] "RemoveContainer" containerID="184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01" Mar 11 08:57:23 crc kubenswrapper[4808]: E0311 08:57:23.662918 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01\": container with ID starting with 184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01 not found: ID does not exist" containerID="184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01" Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.662945 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01"} err="failed to get container status \"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01\": rpc error: code = NotFound desc = could not find container \"184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01\": container with ID starting with 184ed2f6dbceaf63bd90613a90720f246134248da9272d4e46ba5cea8b697f01 not found: ID does not exist" Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.662970 4808 scope.go:117] "RemoveContainer" containerID="8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5" Mar 11 08:57:23 crc kubenswrapper[4808]: E0311 
08:57:23.663290 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5\": container with ID starting with 8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5 not found: ID does not exist" containerID="8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5" Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.663315 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5"} err="failed to get container status \"8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5\": rpc error: code = NotFound desc = could not find container \"8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5\": container with ID starting with 8d4ed15953501bcfdd703d02756652d2b97af2083d590575e32117b7677589e5 not found: ID does not exist" Mar 11 08:57:23 crc kubenswrapper[4808]: I0311 08:57:23.797235 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" path="/var/lib/kubelet/pods/1a552133-9434-4c80-af43-6b54f0388bc3/volumes" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.903333 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:29 crc kubenswrapper[4808]: E0311 08:57:29.904093 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="registry-server" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.904105 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="registry-server" Mar 11 08:57:29 crc kubenswrapper[4808]: E0311 08:57:29.904119 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="extract-content" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.904125 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="extract-content" Mar 11 08:57:29 crc kubenswrapper[4808]: E0311 08:57:29.904139 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="extract-utilities" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.904147 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="extract-utilities" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.904250 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a552133-9434-4c80-af43-6b54f0388bc3" containerName="registry-server" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.904652 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.907080 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vv5jq" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.907132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.908661 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 11 08:57:29 crc kubenswrapper[4808]: I0311 08:57:29.957322 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.056973 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97ns\" (UniqueName: 
\"kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns\") pod \"openstack-operator-index-rb6xg\" (UID: \"47178445-c83c-455e-86eb-6bcf242371d7\") " pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.157549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97ns\" (UniqueName: \"kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns\") pod \"openstack-operator-index-rb6xg\" (UID: \"47178445-c83c-455e-86eb-6bcf242371d7\") " pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.177759 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97ns\" (UniqueName: \"kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns\") pod \"openstack-operator-index-rb6xg\" (UID: \"47178445-c83c-455e-86eb-6bcf242371d7\") " pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.226133 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.465974 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:30 crc kubenswrapper[4808]: I0311 08:57:30.642570 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rb6xg" event={"ID":"47178445-c83c-455e-86eb-6bcf242371d7","Type":"ContainerStarted","Data":"6a3ff6957fefccb9f6cc1e84f9cb583eed3839bf01aad22f162047be1ec93c85"} Mar 11 08:57:31 crc kubenswrapper[4808]: I0311 08:57:31.658126 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rb6xg" event={"ID":"47178445-c83c-455e-86eb-6bcf242371d7","Type":"ContainerStarted","Data":"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b"} Mar 11 08:57:31 crc kubenswrapper[4808]: I0311 08:57:31.674925 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rb6xg" podStartSLOduration=1.7553986579999998 podStartE2EDuration="2.674905583s" podCreationTimestamp="2026-03-11 08:57:29 +0000 UTC" firstStartedPulling="2026-03-11 08:57:30.481805382 +0000 UTC m=+1101.435128702" lastFinishedPulling="2026-03-11 08:57:31.401312267 +0000 UTC m=+1102.354635627" observedRunningTime="2026-03-11 08:57:31.674284615 +0000 UTC m=+1102.627607965" watchObservedRunningTime="2026-03-11 08:57:31.674905583 +0000 UTC m=+1102.628228913" Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.282866 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.674674 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rb6xg" podUID="47178445-c83c-455e-86eb-6bcf242371d7" 
containerName="registry-server" containerID="cri-o://7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b" gracePeriod=2 Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.897421 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s5m27"] Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.898234 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.903088 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s5m27"] Mar 11 08:57:33 crc kubenswrapper[4808]: I0311 08:57:33.910345 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclq5\" (UniqueName: \"kubernetes.io/projected/d013bfe9-72da-4846-9528-9e5d1c6846e7-kube-api-access-fclq5\") pod \"openstack-operator-index-s5m27\" (UID: \"d013bfe9-72da-4846-9528-9e5d1c6846e7\") " pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.011405 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclq5\" (UniqueName: \"kubernetes.io/projected/d013bfe9-72da-4846-9528-9e5d1c6846e7-kube-api-access-fclq5\") pod \"openstack-operator-index-s5m27\" (UID: \"d013bfe9-72da-4846-9528-9e5d1c6846e7\") " pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.034949 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclq5\" (UniqueName: \"kubernetes.io/projected/d013bfe9-72da-4846-9528-9e5d1c6846e7-kube-api-access-fclq5\") pod \"openstack-operator-index-s5m27\" (UID: \"d013bfe9-72da-4846-9528-9e5d1c6846e7\") " pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 
08:57:34.060414 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.214088 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r97ns\" (UniqueName: \"kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns\") pod \"47178445-c83c-455e-86eb-6bcf242371d7\" (UID: \"47178445-c83c-455e-86eb-6bcf242371d7\") " Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.215496 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.217658 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns" (OuterVolumeSpecName: "kube-api-access-r97ns") pod "47178445-c83c-455e-86eb-6bcf242371d7" (UID: "47178445-c83c-455e-86eb-6bcf242371d7"). InnerVolumeSpecName "kube-api-access-r97ns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.315776 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r97ns\" (UniqueName: \"kubernetes.io/projected/47178445-c83c-455e-86eb-6bcf242371d7-kube-api-access-r97ns\") on node \"crc\" DevicePath \"\"" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.405365 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s5m27"] Mar 11 08:57:34 crc kubenswrapper[4808]: W0311 08:57:34.412391 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd013bfe9_72da_4846_9528_9e5d1c6846e7.slice/crio-8bb9b21528389adf7531ecee542a791371128393f33a67d16ef287c305276cae WatchSource:0}: Error finding container 8bb9b21528389adf7531ecee542a791371128393f33a67d16ef287c305276cae: Status 404 returned error can't find the container with id 8bb9b21528389adf7531ecee542a791371128393f33a67d16ef287c305276cae Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.683638 4808 generic.go:334] "Generic (PLEG): container finished" podID="47178445-c83c-455e-86eb-6bcf242371d7" containerID="7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b" exitCode=0 Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.683724 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rb6xg" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.683714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rb6xg" event={"ID":"47178445-c83c-455e-86eb-6bcf242371d7","Type":"ContainerDied","Data":"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b"} Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.683966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rb6xg" event={"ID":"47178445-c83c-455e-86eb-6bcf242371d7","Type":"ContainerDied","Data":"6a3ff6957fefccb9f6cc1e84f9cb583eed3839bf01aad22f162047be1ec93c85"} Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.684010 4808 scope.go:117] "RemoveContainer" containerID="7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.685425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s5m27" event={"ID":"d013bfe9-72da-4846-9528-9e5d1c6846e7","Type":"ContainerStarted","Data":"8bb9b21528389adf7531ecee542a791371128393f33a67d16ef287c305276cae"} Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.706699 4808 scope.go:117] "RemoveContainer" containerID="7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b" Mar 11 08:57:34 crc kubenswrapper[4808]: E0311 08:57:34.707781 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b\": container with ID starting with 7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b not found: ID does not exist" containerID="7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.707831 4808 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b"} err="failed to get container status \"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b\": rpc error: code = NotFound desc = could not find container \"7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b\": container with ID starting with 7a2d8bfa0eac0677dadf3d94eea7749026ea14a364e78e1a2980f2ac3efffa3b not found: ID does not exist" Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.741004 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:34 crc kubenswrapper[4808]: I0311 08:57:34.768508 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rb6xg"] Mar 11 08:57:35 crc kubenswrapper[4808]: I0311 08:57:35.693334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s5m27" event={"ID":"d013bfe9-72da-4846-9528-9e5d1c6846e7","Type":"ContainerStarted","Data":"ae98d8ae7a1486e6d4a0e2b2fe402f73ebec73365b45da5653bab8b5c58d9272"} Mar 11 08:57:35 crc kubenswrapper[4808]: I0311 08:57:35.713251 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s5m27" podStartSLOduration=2.21596198 podStartE2EDuration="2.713227575s" podCreationTimestamp="2026-03-11 08:57:33 +0000 UTC" firstStartedPulling="2026-03-11 08:57:34.417343085 +0000 UTC m=+1105.370666405" lastFinishedPulling="2026-03-11 08:57:34.91460868 +0000 UTC m=+1105.867932000" observedRunningTime="2026-03-11 08:57:35.708372636 +0000 UTC m=+1106.661695966" watchObservedRunningTime="2026-03-11 08:57:35.713227575 +0000 UTC m=+1106.666550905" Mar 11 08:57:35 crc kubenswrapper[4808]: I0311 08:57:35.800275 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47178445-c83c-455e-86eb-6bcf242371d7" 
path="/var/lib/kubelet/pods/47178445-c83c-455e-86eb-6bcf242371d7/volumes" Mar 11 08:57:44 crc kubenswrapper[4808]: I0311 08:57:44.216271 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:44 crc kubenswrapper[4808]: I0311 08:57:44.216886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:44 crc kubenswrapper[4808]: I0311 08:57:44.251171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:44 crc kubenswrapper[4808]: I0311 08:57:44.789317 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-s5m27" Mar 11 08:57:46 crc kubenswrapper[4808]: I0311 08:57:46.027853 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:57:46 crc kubenswrapper[4808]: I0311 08:57:46.027933 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.172389 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4"] Mar 11 08:57:50 crc kubenswrapper[4808]: E0311 08:57:50.173069 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47178445-c83c-455e-86eb-6bcf242371d7" 
containerName="registry-server" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.173087 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="47178445-c83c-455e-86eb-6bcf242371d7" containerName="registry-server" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.173235 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="47178445-c83c-455e-86eb-6bcf242371d7" containerName="registry-server" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.174222 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.176729 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xz6cn" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.194164 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4"] Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.224692 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.225028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 
crc kubenswrapper[4808]: I0311 08:57:50.225122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzr46\" (UniqueName: \"kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.326116 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzr46\" (UniqueName: \"kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.326387 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.326428 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.327116 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.327174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.359406 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzr46\" (UniqueName: \"kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.502450 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.740442 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4"] Mar 11 08:57:50 crc kubenswrapper[4808]: W0311 08:57:50.746989 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a920883_841f_4567_aed2_8d6c2f5e2d1e.slice/crio-472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea WatchSource:0}: Error finding container 472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea: Status 404 returned error can't find the container with id 472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea Mar 11 08:57:50 crc kubenswrapper[4808]: I0311 08:57:50.795263 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" event={"ID":"0a920883-841f-4567-aed2-8d6c2f5e2d1e","Type":"ContainerStarted","Data":"472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea"} Mar 11 08:57:51 crc kubenswrapper[4808]: I0311 08:57:51.809416 4808 generic.go:334] "Generic (PLEG): container finished" podID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerID="1f1849b657dfb1918d6b378848dc779b3f201a2af2a5ed13bed95bee7d36ae41" exitCode=0 Mar 11 08:57:51 crc kubenswrapper[4808]: I0311 08:57:51.809538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" event={"ID":"0a920883-841f-4567-aed2-8d6c2f5e2d1e","Type":"ContainerDied","Data":"1f1849b657dfb1918d6b378848dc779b3f201a2af2a5ed13bed95bee7d36ae41"} Mar 11 08:57:53 crc kubenswrapper[4808]: I0311 08:57:53.827091 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerID="a3392ee5af17159da0fde0baf6e564e191eaceb52476af12fab3f5f50e19455d" exitCode=0 Mar 11 08:57:53 crc kubenswrapper[4808]: I0311 08:57:53.827178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" event={"ID":"0a920883-841f-4567-aed2-8d6c2f5e2d1e","Type":"ContainerDied","Data":"a3392ee5af17159da0fde0baf6e564e191eaceb52476af12fab3f5f50e19455d"} Mar 11 08:57:54 crc kubenswrapper[4808]: I0311 08:57:54.840648 4808 generic.go:334] "Generic (PLEG): container finished" podID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerID="9fdd9f1b16c00a901adff7b8cd806276c27b485c49e8e92902eb15081981aa8f" exitCode=0 Mar 11 08:57:54 crc kubenswrapper[4808]: I0311 08:57:54.840702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" event={"ID":"0a920883-841f-4567-aed2-8d6c2f5e2d1e","Type":"ContainerDied","Data":"9fdd9f1b16c00a901adff7b8cd806276c27b485c49e8e92902eb15081981aa8f"} Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.085240 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.205920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util\") pod \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.205993 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzr46\" (UniqueName: \"kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46\") pod \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.206121 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle\") pod \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\" (UID: \"0a920883-841f-4567-aed2-8d6c2f5e2d1e\") " Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.207342 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle" (OuterVolumeSpecName: "bundle") pod "0a920883-841f-4567-aed2-8d6c2f5e2d1e" (UID: "0a920883-841f-4567-aed2-8d6c2f5e2d1e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.212198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46" (OuterVolumeSpecName: "kube-api-access-hzr46") pod "0a920883-841f-4567-aed2-8d6c2f5e2d1e" (UID: "0a920883-841f-4567-aed2-8d6c2f5e2d1e"). InnerVolumeSpecName "kube-api-access-hzr46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.219494 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util" (OuterVolumeSpecName: "util") pod "0a920883-841f-4567-aed2-8d6c2f5e2d1e" (UID: "0a920883-841f-4567-aed2-8d6c2f5e2d1e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.307791 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.307834 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0a920883-841f-4567-aed2-8d6c2f5e2d1e-util\") on node \"crc\" DevicePath \"\"" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.307846 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzr46\" (UniqueName: \"kubernetes.io/projected/0a920883-841f-4567-aed2-8d6c2f5e2d1e-kube-api-access-hzr46\") on node \"crc\" DevicePath \"\"" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.859005 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" event={"ID":"0a920883-841f-4567-aed2-8d6c2f5e2d1e","Type":"ContainerDied","Data":"472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea"} Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.859052 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="472cd39b78593ddb06eba58906c4e6c587aacf40bd5e82c6bd0d7a6c03f71dea" Mar 11 08:57:56 crc kubenswrapper[4808]: I0311 08:57:56.859164 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.147715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553658-6z4sp"] Mar 11 08:58:00 crc kubenswrapper[4808]: E0311 08:58:00.148543 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="pull" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.148567 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="pull" Mar 11 08:58:00 crc kubenswrapper[4808]: E0311 08:58:00.148596 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="extract" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.148613 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="extract" Mar 11 08:58:00 crc kubenswrapper[4808]: E0311 08:58:00.148638 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="util" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.148654 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="util" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.148846 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a920883-841f-4567-aed2-8d6c2f5e2d1e" containerName="extract" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.149642 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.151455 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.152224 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.157864 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553658-6z4sp"] Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.162173 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.265017 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnb9s\" (UniqueName: \"kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s\") pod \"auto-csr-approver-29553658-6z4sp\" (UID: \"85e3d78d-fe06-452c-a667-90fb06438938\") " pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.367963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnb9s\" (UniqueName: \"kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s\") pod \"auto-csr-approver-29553658-6z4sp\" (UID: \"85e3d78d-fe06-452c-a667-90fb06438938\") " pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.388908 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnb9s\" (UniqueName: \"kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s\") pod \"auto-csr-approver-29553658-6z4sp\" (UID: \"85e3d78d-fe06-452c-a667-90fb06438938\") " 
pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.478610 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.727717 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553658-6z4sp"] Mar 11 08:58:00 crc kubenswrapper[4808]: I0311 08:58:00.891098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" event={"ID":"85e3d78d-fe06-452c-a667-90fb06438938","Type":"ContainerStarted","Data":"385f5d6604555eee87453b75061127932e5067629b894fdc4da5e6de6a2b70e9"} Mar 11 08:58:01 crc kubenswrapper[4808]: I0311 08:58:01.897844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" event={"ID":"85e3d78d-fe06-452c-a667-90fb06438938","Type":"ContainerStarted","Data":"a1bb330fc24dc8b6f1e8e00764cafdfb6954ab68b7644260dd538d66d235432a"} Mar 11 08:58:01 crc kubenswrapper[4808]: I0311 08:58:01.912881 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" podStartSLOduration=1.111880112 podStartE2EDuration="1.912865366s" podCreationTimestamp="2026-03-11 08:58:00 +0000 UTC" firstStartedPulling="2026-03-11 08:58:00.735941078 +0000 UTC m=+1131.689264398" lastFinishedPulling="2026-03-11 08:58:01.536926332 +0000 UTC m=+1132.490249652" observedRunningTime="2026-03-11 08:58:01.908885562 +0000 UTC m=+1132.862208882" watchObservedRunningTime="2026-03-11 08:58:01.912865366 +0000 UTC m=+1132.866188686" Mar 11 08:58:02 crc kubenswrapper[4808]: I0311 08:58:02.905349 4808 generic.go:334] "Generic (PLEG): container finished" podID="85e3d78d-fe06-452c-a667-90fb06438938" containerID="a1bb330fc24dc8b6f1e8e00764cafdfb6954ab68b7644260dd538d66d235432a" exitCode=0 Mar 11 08:58:02 crc 
kubenswrapper[4808]: I0311 08:58:02.905450 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" event={"ID":"85e3d78d-fe06-452c-a667-90fb06438938","Type":"ContainerDied","Data":"a1bb330fc24dc8b6f1e8e00764cafdfb6954ab68b7644260dd538d66d235432a"} Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.364207 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz"] Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.365498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.368498 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6f7lz" Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.390171 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz"] Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.456426 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qvf\" (UniqueName: \"kubernetes.io/projected/e125e50d-5830-4cbc-9eaf-7df7c62cb706-kube-api-access-r5qvf\") pod \"openstack-operator-controller-init-6cf8df7788-ghlgz\" (UID: \"e125e50d-5830-4cbc-9eaf-7df7c62cb706\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.557449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qvf\" (UniqueName: \"kubernetes.io/projected/e125e50d-5830-4cbc-9eaf-7df7c62cb706-kube-api-access-r5qvf\") pod \"openstack-operator-controller-init-6cf8df7788-ghlgz\" (UID: \"e125e50d-5830-4cbc-9eaf-7df7c62cb706\") " 
pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.577379 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qvf\" (UniqueName: \"kubernetes.io/projected/e125e50d-5830-4cbc-9eaf-7df7c62cb706-kube-api-access-r5qvf\") pod \"openstack-operator-controller-init-6cf8df7788-ghlgz\" (UID: \"e125e50d-5830-4cbc-9eaf-7df7c62cb706\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:03 crc kubenswrapper[4808]: I0311 08:58:03.682658 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.133909 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.190671 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz"] Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.265430 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnb9s\" (UniqueName: \"kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s\") pod \"85e3d78d-fe06-452c-a667-90fb06438938\" (UID: \"85e3d78d-fe06-452c-a667-90fb06438938\") " Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.271417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s" (OuterVolumeSpecName: "kube-api-access-pnb9s") pod "85e3d78d-fe06-452c-a667-90fb06438938" (UID: "85e3d78d-fe06-452c-a667-90fb06438938"). InnerVolumeSpecName "kube-api-access-pnb9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.276599 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:04 crc kubenswrapper[4808]: E0311 08:58:04.276908 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e3d78d-fe06-452c-a667-90fb06438938" containerName="oc" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.276923 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e3d78d-fe06-452c-a667-90fb06438938" containerName="oc" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.277072 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e3d78d-fe06-452c-a667-90fb06438938" containerName="oc" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.277942 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.281010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.367648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.367781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc 
kubenswrapper[4808]: I0311 08:58:04.367890 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s294m\" (UniqueName: \"kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.368077 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnb9s\" (UniqueName: \"kubernetes.io/projected/85e3d78d-fe06-452c-a667-90fb06438938-kube-api-access-pnb9s\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.468842 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.468911 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.468948 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s294m\" (UniqueName: \"kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.469455 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.469639 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.486276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s294m\" (UniqueName: \"kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m\") pod \"certified-operators-rck8c\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.593168 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.921532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" event={"ID":"e125e50d-5830-4cbc-9eaf-7df7c62cb706","Type":"ContainerStarted","Data":"64c140575544579482da206cf16c8e9c6a6edc7788e46430325bd9801265e076"} Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.923599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" event={"ID":"85e3d78d-fe06-452c-a667-90fb06438938","Type":"ContainerDied","Data":"385f5d6604555eee87453b75061127932e5067629b894fdc4da5e6de6a2b70e9"} Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.923629 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385f5d6604555eee87453b75061127932e5067629b894fdc4da5e6de6a2b70e9" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.923657 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553658-6z4sp" Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.974632 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553652-tbd96"] Mar 11 08:58:04 crc kubenswrapper[4808]: I0311 08:58:04.980243 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553652-tbd96"] Mar 11 08:58:05 crc kubenswrapper[4808]: I0311 08:58:05.065579 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:05 crc kubenswrapper[4808]: I0311 08:58:05.798976 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165effca-3ddd-4f44-9e37-38142ea6cfe6" path="/var/lib/kubelet/pods/165effca-3ddd-4f44-9e37-38142ea6cfe6/volumes" Mar 11 08:58:05 crc kubenswrapper[4808]: I0311 08:58:05.932169 4808 generic.go:334] "Generic (PLEG): container finished" podID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerID="9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591" exitCode=0 Mar 11 08:58:05 crc kubenswrapper[4808]: I0311 08:58:05.932217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerDied","Data":"9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591"} Mar 11 08:58:05 crc kubenswrapper[4808]: I0311 08:58:05.932241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerStarted","Data":"e68bb49f53532d56fbe50107fa1c239103b01fdc264c88d5653a1ae4ba303b97"} Mar 11 08:58:08 crc kubenswrapper[4808]: I0311 08:58:08.966546 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" 
event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerStarted","Data":"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f"} Mar 11 08:58:09 crc kubenswrapper[4808]: I0311 08:58:09.976552 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" event={"ID":"e125e50d-5830-4cbc-9eaf-7df7c62cb706","Type":"ContainerStarted","Data":"ec01ceb758acf1dacd489c79fadd2e8f26f67de351b5e72d52082493cb08674a"} Mar 11 08:58:09 crc kubenswrapper[4808]: I0311 08:58:09.976951 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:09 crc kubenswrapper[4808]: I0311 08:58:09.978619 4808 generic.go:334] "Generic (PLEG): container finished" podID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerID="5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f" exitCode=0 Mar 11 08:58:09 crc kubenswrapper[4808]: I0311 08:58:09.978663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerDied","Data":"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f"} Mar 11 08:58:10 crc kubenswrapper[4808]: I0311 08:58:10.010563 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" podStartSLOduration=2.4032311650000002 podStartE2EDuration="7.010538723s" podCreationTimestamp="2026-03-11 08:58:03 +0000 UTC" firstStartedPulling="2026-03-11 08:58:04.196908395 +0000 UTC m=+1135.150231715" lastFinishedPulling="2026-03-11 08:58:08.804215943 +0000 UTC m=+1139.757539273" observedRunningTime="2026-03-11 08:58:10.002011539 +0000 UTC m=+1140.955334869" watchObservedRunningTime="2026-03-11 08:58:10.010538723 +0000 UTC m=+1140.963862083" Mar 11 08:58:10 crc kubenswrapper[4808]: I0311 
08:58:10.986982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerStarted","Data":"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f"} Mar 11 08:58:11 crc kubenswrapper[4808]: I0311 08:58:11.003934 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rck8c" podStartSLOduration=2.231701953 podStartE2EDuration="7.00391085s" podCreationTimestamp="2026-03-11 08:58:04 +0000 UTC" firstStartedPulling="2026-03-11 08:58:05.933567884 +0000 UTC m=+1136.886891204" lastFinishedPulling="2026-03-11 08:58:10.705776791 +0000 UTC m=+1141.659100101" observedRunningTime="2026-03-11 08:58:11.00251761 +0000 UTC m=+1141.955840940" watchObservedRunningTime="2026-03-11 08:58:11.00391085 +0000 UTC m=+1141.957234170" Mar 11 08:58:13 crc kubenswrapper[4808]: I0311 08:58:13.686101 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-ghlgz" Mar 11 08:58:14 crc kubenswrapper[4808]: I0311 08:58:14.593944 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:14 crc kubenswrapper[4808]: I0311 08:58:14.593996 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:14 crc kubenswrapper[4808]: I0311 08:58:14.645073 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:15 crc kubenswrapper[4808]: I0311 08:58:15.078411 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:16 crc kubenswrapper[4808]: I0311 08:58:16.027680 4808 patch_prober.go:28] interesting 
pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:58:16 crc kubenswrapper[4808]: I0311 08:58:16.027789 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:58:16 crc kubenswrapper[4808]: I0311 08:58:16.258048 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.023066 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rck8c" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="registry-server" containerID="cri-o://54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f" gracePeriod=2 Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.390183 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.482169 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities\") pod \"9140f76f-5644-4b2c-a1aa-143025a120ec\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.482276 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content\") pod \"9140f76f-5644-4b2c-a1aa-143025a120ec\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.482325 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s294m\" (UniqueName: \"kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m\") pod \"9140f76f-5644-4b2c-a1aa-143025a120ec\" (UID: \"9140f76f-5644-4b2c-a1aa-143025a120ec\") " Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.483249 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities" (OuterVolumeSpecName: "utilities") pod "9140f76f-5644-4b2c-a1aa-143025a120ec" (UID: "9140f76f-5644-4b2c-a1aa-143025a120ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.487527 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m" (OuterVolumeSpecName: "kube-api-access-s294m") pod "9140f76f-5644-4b2c-a1aa-143025a120ec" (UID: "9140f76f-5644-4b2c-a1aa-143025a120ec"). InnerVolumeSpecName "kube-api-access-s294m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.542272 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9140f76f-5644-4b2c-a1aa-143025a120ec" (UID: "9140f76f-5644-4b2c-a1aa-143025a120ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.583821 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s294m\" (UniqueName: \"kubernetes.io/projected/9140f76f-5644-4b2c-a1aa-143025a120ec-kube-api-access-s294m\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.583855 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:17 crc kubenswrapper[4808]: I0311 08:58:17.583867 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9140f76f-5644-4b2c-a1aa-143025a120ec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.038021 4808 generic.go:334] "Generic (PLEG): container finished" podID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerID="54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f" exitCode=0 Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.038096 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerDied","Data":"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f"} Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.038144 4808 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-rck8c" event={"ID":"9140f76f-5644-4b2c-a1aa-143025a120ec","Type":"ContainerDied","Data":"e68bb49f53532d56fbe50107fa1c239103b01fdc264c88d5653a1ae4ba303b97"} Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.038143 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rck8c" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.038168 4808 scope.go:117] "RemoveContainer" containerID="54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.059207 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.063154 4808 scope.go:117] "RemoveContainer" containerID="5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.065440 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rck8c"] Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.080280 4808 scope.go:117] "RemoveContainer" containerID="9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.101077 4808 scope.go:117] "RemoveContainer" containerID="54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f" Mar 11 08:58:18 crc kubenswrapper[4808]: E0311 08:58:18.101580 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f\": container with ID starting with 54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f not found: ID does not exist" containerID="54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 
08:58:18.101625 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f"} err="failed to get container status \"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f\": rpc error: code = NotFound desc = could not find container \"54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f\": container with ID starting with 54f0636e4f1ef0099fb2a48579fb14f939dde4096104c654d8d5b0efa378cb8f not found: ID does not exist" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.101649 4808 scope.go:117] "RemoveContainer" containerID="5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f" Mar 11 08:58:18 crc kubenswrapper[4808]: E0311 08:58:18.102056 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f\": container with ID starting with 5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f not found: ID does not exist" containerID="5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.102096 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f"} err="failed to get container status \"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f\": rpc error: code = NotFound desc = could not find container \"5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f\": container with ID starting with 5a3b31bd24180f95ac47a007d9c791a0ee6094118d3d146cc38d2e03cefa7b2f not found: ID does not exist" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.102125 4808 scope.go:117] "RemoveContainer" containerID="9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591" Mar 11 08:58:18 crc 
kubenswrapper[4808]: E0311 08:58:18.102417 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591\": container with ID starting with 9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591 not found: ID does not exist" containerID="9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591" Mar 11 08:58:18 crc kubenswrapper[4808]: I0311 08:58:18.102440 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591"} err="failed to get container status \"9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591\": rpc error: code = NotFound desc = could not find container \"9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591\": container with ID starting with 9a8d2f174d6ee3ac02c04772632398cdbf92de0bd9b1fff2406641a844460591 not found: ID does not exist" Mar 11 08:58:19 crc kubenswrapper[4808]: I0311 08:58:19.796057 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" path="/var/lib/kubelet/pods/9140f76f-5644-4b2c-a1aa-143025a120ec/volumes" Mar 11 08:58:36 crc kubenswrapper[4808]: I0311 08:58:36.257516 4808 scope.go:117] "RemoveContainer" containerID="88f77b9f5b01e21a5ceb4871bdbe498513e15bfd8369974e34be9506b9c7eae8" Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.027714 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.029295 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.029492 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.030211 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.030377 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a" gracePeriod=600 Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.215715 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a" exitCode=0 Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.215784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a"} Mar 11 08:58:46 crc kubenswrapper[4808]: I0311 08:58:46.216077 4808 scope.go:117] "RemoveContainer" 
containerID="9e9ccff456ae05e5b80f4063f4e4da8d311a6152671f06021793956cce879777" Mar 11 08:58:47 crc kubenswrapper[4808]: I0311 08:58:47.225129 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe"} Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.848211 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb"] Mar 11 08:58:50 crc kubenswrapper[4808]: E0311 08:58:50.849199 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="registry-server" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.849217 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="registry-server" Mar 11 08:58:50 crc kubenswrapper[4808]: E0311 08:58:50.849227 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="extract-utilities" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.849234 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="extract-utilities" Mar 11 08:58:50 crc kubenswrapper[4808]: E0311 08:58:50.849251 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="extract-content" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.849259 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" containerName="extract-content" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.849428 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9140f76f-5644-4b2c-a1aa-143025a120ec" 
containerName="registry-server" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.849959 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.852533 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zf8q9" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.853879 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.855048 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.857775 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bh6ln" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.865131 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.870941 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.871949 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.873510 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bprpt" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.875504 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.884960 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.905097 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.905923 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.907450 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7npwn" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.914199 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.917655 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.920943 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hprt\" (UniqueName: \"kubernetes.io/projected/383d2b79-82c7-4abd-bd0e-7cc157c35f28-kube-api-access-7hprt\") pod \"barbican-operator-controller-manager-677bd678f7-s5kbb\" (UID: \"383d2b79-82c7-4abd-bd0e-7cc157c35f28\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.921006 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2px8\" (UniqueName: \"kubernetes.io/projected/b82ff9e7-d047-4da9-8de7-177e1a3fbb7e-kube-api-access-s2px8\") pod \"cinder-operator-controller-manager-984cd4dcf-nt6vp\" (UID: \"b82ff9e7-d047-4da9-8de7-177e1a3fbb7e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.921034 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq96j\" (UniqueName: \"kubernetes.io/projected/46a1ed3a-101b-4412-80d3-b246794f4439-kube-api-access-fq96j\") pod \"designate-operator-controller-manager-66d56f6ff4-7zq56\" (UID: \"46a1ed3a-101b-4412-80d3-b246794f4439\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.921089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75ts\" (UniqueName: \"kubernetes.io/projected/4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0-kube-api-access-n75ts\") pod \"heat-operator-controller-manager-77b6666d85-6wz56\" (UID: \"4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0\") " 
pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.934859 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-z95pb" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.936653 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.938291 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.943965 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.960517 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9jdms" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.980657 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.993945 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf"] Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.995056 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.997499 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vcms8" Mar 11 08:58:50 crc kubenswrapper[4808]: I0311 08:58:50.997847 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.002822 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.015824 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.016642 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mklhb\" (UniqueName: \"kubernetes.io/projected/cd11b5ec-454a-4bbb-a4e4-5b4569c0e219-kube-api-access-mklhb\") pod \"glance-operator-controller-manager-5964f64c48-lqxgc\" (UID: \"cd11b5ec-454a-4bbb-a4e4-5b4569c0e219\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024616 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hprt\" (UniqueName: \"kubernetes.io/projected/383d2b79-82c7-4abd-bd0e-7cc157c35f28-kube-api-access-7hprt\") pod \"barbican-operator-controller-manager-677bd678f7-s5kbb\" (UID: \"383d2b79-82c7-4abd-bd0e-7cc157c35f28\") " 
pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6z9\" (UniqueName: \"kubernetes.io/projected/14bd6d19-3c60-4752-bd29-09809df6f7b6-kube-api-access-2l6z9\") pod \"horizon-operator-controller-manager-6d9d6b584d-n49mc\" (UID: \"14bd6d19-3c60-4752-bd29-09809df6f7b6\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2px8\" (UniqueName: \"kubernetes.io/projected/b82ff9e7-d047-4da9-8de7-177e1a3fbb7e-kube-api-access-s2px8\") pod \"cinder-operator-controller-manager-984cd4dcf-nt6vp\" (UID: \"b82ff9e7-d047-4da9-8de7-177e1a3fbb7e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024869 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq96j\" (UniqueName: \"kubernetes.io/projected/46a1ed3a-101b-4412-80d3-b246794f4439-kube-api-access-fq96j\") pod \"designate-operator-controller-manager-66d56f6ff4-7zq56\" (UID: \"46a1ed3a-101b-4412-80d3-b246794f4439\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.024957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fks\" (UniqueName: \"kubernetes.io/projected/557a289a-7329-40b7-a593-bfcfa58e679d-kube-api-access-54fks\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: 
I0311 08:58:51.025045 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75ts\" (UniqueName: \"kubernetes.io/projected/4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0-kube-api-access-n75ts\") pod \"heat-operator-controller-manager-77b6666d85-6wz56\" (UID: \"4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.025130 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.026678 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jc9x4" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.027623 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.054494 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.055582 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.060785 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kfh5x" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.061217 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hprt\" (UniqueName: \"kubernetes.io/projected/383d2b79-82c7-4abd-bd0e-7cc157c35f28-kube-api-access-7hprt\") pod \"barbican-operator-controller-manager-677bd678f7-s5kbb\" (UID: \"383d2b79-82c7-4abd-bd0e-7cc157c35f28\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.061691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2px8\" (UniqueName: \"kubernetes.io/projected/b82ff9e7-d047-4da9-8de7-177e1a3fbb7e-kube-api-access-s2px8\") pod \"cinder-operator-controller-manager-984cd4dcf-nt6vp\" (UID: \"b82ff9e7-d047-4da9-8de7-177e1a3fbb7e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.069176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq96j\" (UniqueName: \"kubernetes.io/projected/46a1ed3a-101b-4412-80d3-b246794f4439-kube-api-access-fq96j\") pod \"designate-operator-controller-manager-66d56f6ff4-7zq56\" (UID: \"46a1ed3a-101b-4412-80d3-b246794f4439\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.078874 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.079821 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.088065 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pwr9h" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.092272 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75ts\" (UniqueName: \"kubernetes.io/projected/4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0-kube-api-access-n75ts\") pod \"heat-operator-controller-manager-77b6666d85-6wz56\" (UID: \"4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.100721 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.110373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nj7p\" (UniqueName: \"kubernetes.io/projected/f90023c4-1729-4193-9853-4548be9c786c-kube-api-access-2nj7p\") pod \"keystone-operator-controller-manager-684f77d66d-9k6vf\" (UID: \"f90023c4-1729-4193-9853-4548be9c786c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126136 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " 
pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126157 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glj2\" (UniqueName: \"kubernetes.io/projected/b1553dcf-b18c-45ef-a328-9eb5b86d5a02-kube-api-access-9glj2\") pod \"ironic-operator-controller-manager-6bbb499bbc-n674k\" (UID: \"b1553dcf-b18c-45ef-a328-9eb5b86d5a02\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mklhb\" (UniqueName: \"kubernetes.io/projected/cd11b5ec-454a-4bbb-a4e4-5b4569c0e219-kube-api-access-mklhb\") pod \"glance-operator-controller-manager-5964f64c48-lqxgc\" (UID: \"cd11b5ec-454a-4bbb-a4e4-5b4569c0e219\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6z9\" (UniqueName: \"kubernetes.io/projected/14bd6d19-3c60-4752-bd29-09809df6f7b6-kube-api-access-2l6z9\") pod \"horizon-operator-controller-manager-6d9d6b584d-n49mc\" (UID: \"14bd6d19-3c60-4752-bd29-09809df6f7b6\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.126273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtxv\" (UniqueName: \"kubernetes.io/projected/b39efce9-91d6-43b0-b80c-30c223d26460-kube-api-access-gjtxv\") pod \"manila-operator-controller-manager-68f45f9d9f-95v7v\" (UID: \"b39efce9-91d6-43b0-b80c-30c223d26460\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 
08:58:51.126295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fks\" (UniqueName: \"kubernetes.io/projected/557a289a-7329-40b7-a593-bfcfa58e679d-kube-api-access-54fks\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.126776 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.126857 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert podName:557a289a-7329-40b7-a593-bfcfa58e679d nodeName:}" failed. No retries permitted until 2026-03-11 08:58:51.626838217 +0000 UTC m=+1182.580161537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert") pod "infra-operator-controller-manager-5995f4446f-cb6bf" (UID: "557a289a-7329-40b7-a593-bfcfa58e679d") : secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.130149 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.131221 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.133579 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9brpn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.143842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.150352 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6z9\" (UniqueName: \"kubernetes.io/projected/14bd6d19-3c60-4752-bd29-09809df6f7b6-kube-api-access-2l6z9\") pod \"horizon-operator-controller-manager-6d9d6b584d-n49mc\" (UID: \"14bd6d19-3c60-4752-bd29-09809df6f7b6\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.155292 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fks\" (UniqueName: \"kubernetes.io/projected/557a289a-7329-40b7-a593-bfcfa58e679d-kube-api-access-54fks\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.155592 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.156526 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.165193 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rctxf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.175643 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mklhb\" (UniqueName: \"kubernetes.io/projected/cd11b5ec-454a-4bbb-a4e4-5b4569c0e219-kube-api-access-mklhb\") pod \"glance-operator-controller-manager-5964f64c48-lqxgc\" (UID: \"cd11b5ec-454a-4bbb-a4e4-5b4569c0e219\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.201438 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.202410 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.205781 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6g49g" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.208830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.219401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227795 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64r4w\" (UniqueName: \"kubernetes.io/projected/6863fca8-473b-4fc3-8e19-f717c2d164c9-kube-api-access-64r4w\") pod \"mariadb-operator-controller-manager-658d4cdd5-lsmx2\" (UID: \"6863fca8-473b-4fc3-8e19-f717c2d164c9\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n524j\" (UniqueName: \"kubernetes.io/projected/94668c95-67c1-4f5e-9bde-e2d34d7ce631-kube-api-access-n524j\") pod \"nova-operator-controller-manager-569cc54c5-44gbn\" (UID: \"94668c95-67c1-4f5e-9bde-e2d34d7ce631\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtxv\" (UniqueName: \"kubernetes.io/projected/b39efce9-91d6-43b0-b80c-30c223d26460-kube-api-access-gjtxv\") pod \"manila-operator-controller-manager-68f45f9d9f-95v7v\" (UID: 
\"b39efce9-91d6-43b0-b80c-30c223d26460\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6j7\" (UniqueName: \"kubernetes.io/projected/9d6bd72a-6ed8-4558-950a-80c4aab533b0-kube-api-access-cw6j7\") pod \"neutron-operator-controller-manager-776c5696bf-snl5f\" (UID: \"9d6bd72a-6ed8-4558-950a-80c4aab533b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nj7p\" (UniqueName: \"kubernetes.io/projected/f90023c4-1729-4193-9853-4548be9c786c-kube-api-access-2nj7p\") pod \"keystone-operator-controller-manager-684f77d66d-9k6vf\" (UID: \"f90023c4-1729-4193-9853-4548be9c786c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.227963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glj2\" (UniqueName: \"kubernetes.io/projected/b1553dcf-b18c-45ef-a328-9eb5b86d5a02-kube-api-access-9glj2\") pod \"ironic-operator-controller-manager-6bbb499bbc-n674k\" (UID: \"b1553dcf-b18c-45ef-a328-9eb5b86d5a02\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.231628 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.233049 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.236157 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gfkcg" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.239060 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.239441 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.256184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nj7p\" (UniqueName: \"kubernetes.io/projected/f90023c4-1729-4193-9853-4548be9c786c-kube-api-access-2nj7p\") pod \"keystone-operator-controller-manager-684f77d66d-9k6vf\" (UID: \"f90023c4-1729-4193-9853-4548be9c786c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.258864 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtxv\" (UniqueName: \"kubernetes.io/projected/b39efce9-91d6-43b0-b80c-30c223d26460-kube-api-access-gjtxv\") pod \"manila-operator-controller-manager-68f45f9d9f-95v7v\" (UID: \"b39efce9-91d6-43b0-b80c-30c223d26460\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.259015 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glj2\" (UniqueName: \"kubernetes.io/projected/b1553dcf-b18c-45ef-a328-9eb5b86d5a02-kube-api-access-9glj2\") pod \"ironic-operator-controller-manager-6bbb499bbc-n674k\" (UID: \"b1553dcf-b18c-45ef-a328-9eb5b86d5a02\") 
" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.265622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.267488 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.277955 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.282443 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.283810 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.284900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.290211 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.290524 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.290809 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gzq4m" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.290958 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9fm7l" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.292761 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.293666 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.296258 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.299975 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g9ddq" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.302464 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.311746 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.311872 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.317311 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.323685 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.323754 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4nmdg" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.336082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64r4w\" (UniqueName: \"kubernetes.io/projected/6863fca8-473b-4fc3-8e19-f717c2d164c9-kube-api-access-64r4w\") pod \"mariadb-operator-controller-manager-658d4cdd5-lsmx2\" (UID: \"6863fca8-473b-4fc3-8e19-f717c2d164c9\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.366276 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wgz\" (UniqueName: \"kubernetes.io/projected/cf281551-b7f4-4d5c-823e-6e70132ae2d0-kube-api-access-b4wgz\") pod \"ovn-operator-controller-manager-bbc5b68f9-pgtxk\" (UID: \"cf281551-b7f4-4d5c-823e-6e70132ae2d0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.366370 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n524j\" (UniqueName: \"kubernetes.io/projected/94668c95-67c1-4f5e-9bde-e2d34d7ce631-kube-api-access-n524j\") pod \"nova-operator-controller-manager-569cc54c5-44gbn\" (UID: \"94668c95-67c1-4f5e-9bde-e2d34d7ce631\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369614 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6skm\" (UniqueName: \"kubernetes.io/projected/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-kube-api-access-m6skm\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369709 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6j7\" (UniqueName: \"kubernetes.io/projected/9d6bd72a-6ed8-4558-950a-80c4aab533b0-kube-api-access-cw6j7\") pod \"neutron-operator-controller-manager-776c5696bf-snl5f\" (UID: \"9d6bd72a-6ed8-4558-950a-80c4aab533b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369778 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfcd6\" (UniqueName: \"kubernetes.io/projected/4b276a74-0d31-48b5-9556-0671578c4ab2-kube-api-access-bfcd6\") pod \"swift-operator-controller-manager-677c674df7-j9bkm\" (UID: \"4b276a74-0d31-48b5-9556-0671578c4ab2\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdds\" 
(UniqueName: \"kubernetes.io/projected/7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a-kube-api-access-4sdds\") pod \"placement-operator-controller-manager-574d45c66c-b72f8\" (UID: \"7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.369951 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5q6f\" (UniqueName: \"kubernetes.io/projected/354b0411-5c50-48d0-9ed8-e4871a92dc0e-kube-api-access-m5q6f\") pod \"octavia-operator-controller-manager-5f4f55cb5c-8kpv9\" (UID: \"354b0411-5c50-48d0-9ed8-e4871a92dc0e\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.338436 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.354246 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.378058 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64r4w\" (UniqueName: \"kubernetes.io/projected/6863fca8-473b-4fc3-8e19-f717c2d164c9-kube-api-access-64r4w\") pod \"mariadb-operator-controller-manager-658d4cdd5-lsmx2\" (UID: \"6863fca8-473b-4fc3-8e19-f717c2d164c9\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.386270 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.397658 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6j7\" (UniqueName: \"kubernetes.io/projected/9d6bd72a-6ed8-4558-950a-80c4aab533b0-kube-api-access-cw6j7\") pod \"neutron-operator-controller-manager-776c5696bf-snl5f\" (UID: \"9d6bd72a-6ed8-4558-950a-80c4aab533b0\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.404571 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n524j\" (UniqueName: \"kubernetes.io/projected/94668c95-67c1-4f5e-9bde-e2d34d7ce631-kube-api-access-n524j\") pod \"nova-operator-controller-manager-569cc54c5-44gbn\" (UID: \"94668c95-67c1-4f5e-9bde-e2d34d7ce631\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.450177 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.456547 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.468044 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pfvzv" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471273 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdds\" (UniqueName: \"kubernetes.io/projected/7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a-kube-api-access-4sdds\") pod \"placement-operator-controller-manager-574d45c66c-b72f8\" (UID: \"7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5q6f\" (UniqueName: \"kubernetes.io/projected/354b0411-5c50-48d0-9ed8-e4871a92dc0e-kube-api-access-m5q6f\") pod \"octavia-operator-controller-manager-5f4f55cb5c-8kpv9\" (UID: \"354b0411-5c50-48d0-9ed8-e4871a92dc0e\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471393 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4wgz\" (UniqueName: \"kubernetes.io/projected/cf281551-b7f4-4d5c-823e-6e70132ae2d0-kube-api-access-b4wgz\") pod \"ovn-operator-controller-manager-bbc5b68f9-pgtxk\" (UID: \"cf281551-b7f4-4d5c-823e-6e70132ae2d0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" 
(UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6skm\" (UniqueName: \"kubernetes.io/projected/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-kube-api-access-m6skm\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.471475 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfcd6\" (UniqueName: \"kubernetes.io/projected/4b276a74-0d31-48b5-9556-0671578c4ab2-kube-api-access-bfcd6\") pod \"swift-operator-controller-manager-677c674df7-j9bkm\" (UID: \"4b276a74-0d31-48b5-9556-0671578c4ab2\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.472272 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.472351 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:51.97232698 +0000 UTC m=+1182.925650300 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.495785 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.507219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6skm\" (UniqueName: \"kubernetes.io/projected/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-kube-api-access-m6skm\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.507879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wgz\" (UniqueName: \"kubernetes.io/projected/cf281551-b7f4-4d5c-823e-6e70132ae2d0-kube-api-access-b4wgz\") pod \"ovn-operator-controller-manager-bbc5b68f9-pgtxk\" (UID: \"cf281551-b7f4-4d5c-823e-6e70132ae2d0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.512486 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5q6f\" (UniqueName: \"kubernetes.io/projected/354b0411-5c50-48d0-9ed8-e4871a92dc0e-kube-api-access-m5q6f\") pod \"octavia-operator-controller-manager-5f4f55cb5c-8kpv9\" (UID: \"354b0411-5c50-48d0-9ed8-e4871a92dc0e\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.519731 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.524697 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.525667 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.530443 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vxjqn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.531201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfcd6\" (UniqueName: \"kubernetes.io/projected/4b276a74-0d31-48b5-9556-0671578c4ab2-kube-api-access-bfcd6\") pod \"swift-operator-controller-manager-677c674df7-j9bkm\" (UID: \"4b276a74-0d31-48b5-9556-0671578c4ab2\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.533809 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.534392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sdds\" (UniqueName: \"kubernetes.io/projected/7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a-kube-api-access-4sdds\") pod \"placement-operator-controller-manager-574d45c66c-b72f8\" (UID: \"7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.541953 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.558342 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.572685 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxrg\" (UniqueName: \"kubernetes.io/projected/1ea888eb-3b8a-4871-846d-6a3006087ffb-kube-api-access-dsxrg\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-clf8c\" (UID: \"1ea888eb-3b8a-4871-846d-6a3006087ffb\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.585942 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.586379 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.587377 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.592737 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.600649 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.604659 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-k7v2r" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.636564 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.638606 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.639459 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.646963 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.656011 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.656767 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.657008 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bpnvg" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.657191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.676572 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.677010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.677099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxrg\" (UniqueName: \"kubernetes.io/projected/1ea888eb-3b8a-4871-846d-6a3006087ffb-kube-api-access-dsxrg\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-clf8c\" (UID: \"1ea888eb-3b8a-4871-846d-6a3006087ffb\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:58:51 crc kubenswrapper[4808]: 
I0311 08:58:51.677184 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt45\" (UniqueName: \"kubernetes.io/projected/cd5790a0-53c4-4ee8-95fa-f72ea9135488-kube-api-access-wgt45\") pod \"test-operator-controller-manager-5c5cb9c4d7-kns94\" (UID: \"cd5790a0-53c4-4ee8-95fa-f72ea9135488\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.677233 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bh8x\" (UniqueName: \"kubernetes.io/projected/90bdb089-2ce0-4e37-bda6-5db68ebd89e8-kube-api-access-7bh8x\") pod \"watcher-operator-controller-manager-6dd88c6f67-jw9mr\" (UID: \"90bdb089-2ce0-4e37-bda6-5db68ebd89e8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.677436 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.677488 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert podName:557a289a-7329-40b7-a593-bfcfa58e679d nodeName:}" failed. No retries permitted until 2026-03-11 08:58:52.677470169 +0000 UTC m=+1183.630793489 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert") pod "infra-operator-controller-manager-5995f4446f-cb6bf" (UID: "557a289a-7329-40b7-a593-bfcfa58e679d") : secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.709588 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxrg\" (UniqueName: \"kubernetes.io/projected/1ea888eb-3b8a-4871-846d-6a3006087ffb-kube-api-access-dsxrg\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-clf8c\" (UID: \"1ea888eb-3b8a-4871-846d-6a3006087ffb\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.747710 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.767225 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.784175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.784226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt45\" (UniqueName: \"kubernetes.io/projected/cd5790a0-53c4-4ee8-95fa-f72ea9135488-kube-api-access-wgt45\") pod \"test-operator-controller-manager-5c5cb9c4d7-kns94\" (UID: \"cd5790a0-53c4-4ee8-95fa-f72ea9135488\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.784265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jkf9\" (UniqueName: \"kubernetes.io/projected/37e5de88-802a-408e-9362-51166d0b7662-kube-api-access-9jkf9\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.784291 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc 
kubenswrapper[4808]: I0311 08:58:51.784329 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bh8x\" (UniqueName: \"kubernetes.io/projected/90bdb089-2ce0-4e37-bda6-5db68ebd89e8-kube-api-access-7bh8x\") pod \"watcher-operator-controller-manager-6dd88c6f67-jw9mr\" (UID: \"90bdb089-2ce0-4e37-bda6-5db68ebd89e8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.870516 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt45\" (UniqueName: \"kubernetes.io/projected/cd5790a0-53c4-4ee8-95fa-f72ea9135488-kube-api-access-wgt45\") pod \"test-operator-controller-manager-5c5cb9c4d7-kns94\" (UID: \"cd5790a0-53c4-4ee8-95fa-f72ea9135488\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.882247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bh8x\" (UniqueName: \"kubernetes.io/projected/90bdb089-2ce0-4e37-bda6-5db68ebd89e8-kube-api-access-7bh8x\") pod \"watcher-operator-controller-manager-6dd88c6f67-jw9mr\" (UID: \"90bdb089-2ce0-4e37-bda6-5db68ebd89e8\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.886113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.886170 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jkf9\" (UniqueName: 
\"kubernetes.io/projected/37e5de88-802a-408e-9362-51166d0b7662-kube-api-access-9jkf9\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.886196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.886439 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.886497 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:52.386478728 +0000 UTC m=+1183.339802048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.887626 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.887677 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:52.387664142 +0000 UTC m=+1183.340987462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.922910 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jkf9\" (UniqueName: \"kubernetes.io/projected/37e5de88-802a-408e-9362-51166d0b7662-kube-api-access-9jkf9\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.988045 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.988224 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: E0311 08:58:51.988289 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:52.98827032 +0000 UTC m=+1183.941593640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.991806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.994232 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.994958 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57"] Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.995047 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" Mar 11 08:58:51 crc kubenswrapper[4808]: I0311 08:58:51.997093 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9x9z2" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.011756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.014347 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.092404 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqmqk\" (UniqueName: \"kubernetes.io/projected/b0711f78-69be-46d0-8857-e01d6927edfd-kube-api-access-vqmqk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7lk57\" (UID: \"b0711f78-69be-46d0-8857-e01d6927edfd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.194270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqmqk\" (UniqueName: \"kubernetes.io/projected/b0711f78-69be-46d0-8857-e01d6927edfd-kube-api-access-vqmqk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7lk57\" (UID: \"b0711f78-69be-46d0-8857-e01d6927edfd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.228194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqmqk\" (UniqueName: \"kubernetes.io/projected/b0711f78-69be-46d0-8857-e01d6927edfd-kube-api-access-vqmqk\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-7lk57\" (UID: \"b0711f78-69be-46d0-8857-e01d6927edfd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.231496 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb"] Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.341915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.398272 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.398432 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.398486 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.398508 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.398598 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:53.398581048 +0000 UTC m=+1184.351904368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.398614 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:53.398607248 +0000 UTC m=+1184.351930558 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.665470 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp"] Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.671714 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc"] Mar 11 08:58:52 crc kubenswrapper[4808]: W0311 08:58:52.672611 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd11b5ec_454a_4bbb_a4e4_5b4569c0e219.slice/crio-e04e0199d626240145fd134aa353e1aa9230d6aba6c18baa2f95cad7aea5b7a7 WatchSource:0}: Error finding container 
e04e0199d626240145fd134aa353e1aa9230d6aba6c18baa2f95cad7aea5b7a7: Status 404 returned error can't find the container with id e04e0199d626240145fd134aa353e1aa9230d6aba6c18baa2f95cad7aea5b7a7 Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.685468 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56"] Mar 11 08:58:52 crc kubenswrapper[4808]: W0311 08:58:52.691004 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0b42e5_b4bf_47a9_afed_ddcf3f770ca0.slice/crio-b11030fd6cad4b1244c0fa971f0c6b9283a653cd5630279dbaf5e58e5ec418a4 WatchSource:0}: Error finding container b11030fd6cad4b1244c0fa971f0c6b9283a653cd5630279dbaf5e58e5ec418a4: Status 404 returned error can't find the container with id b11030fd6cad4b1244c0fa971f0c6b9283a653cd5630279dbaf5e58e5ec418a4 Mar 11 08:58:52 crc kubenswrapper[4808]: I0311 08:58:52.704701 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.705993 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:52 crc kubenswrapper[4808]: E0311 08:58:52.706080 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert podName:557a289a-7329-40b7-a593-bfcfa58e679d nodeName:}" failed. No retries permitted until 2026-03-11 08:58:54.706057773 +0000 UTC m=+1185.659381093 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert") pod "infra-operator-controller-manager-5995f4446f-cb6bf" (UID: "557a289a-7329-40b7-a593-bfcfa58e679d") : secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.011880 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.012074 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.012158 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:55.012138349 +0000 UTC m=+1185.965461669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.096830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.106170 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.132888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.138472 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.148540 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.158885 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.177790 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.188995 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.197451 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k"] Mar 11 08:58:53 crc kubenswrapper[4808]: W0311 08:58:53.200212 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1553dcf_b18c_45ef_a328_9eb5b86d5a02.slice/crio-bf28d87959d7cccc294f30e802ba3a8c1ad958b8f79669e093f63bf6af9f59d1 WatchSource:0}: Error finding container bf28d87959d7cccc294f30e802ba3a8c1ad958b8f79669e093f63bf6af9f59d1: Status 404 returned error can't find the container with id bf28d87959d7cccc294f30e802ba3a8c1ad958b8f79669e093f63bf6af9f59d1 Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.205647 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.213430 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.221550 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.226544 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.232026 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9"] Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.237759 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8"] Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.238536 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cw6j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-snl5f_openstack-operators(9d6bd72a-6ed8-4558-950a-80c4aab533b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.238680 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5q6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-8kpv9_openstack-operators(354b0411-5c50-48d0-9ed8-e4871a92dc0e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.238835 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7bh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-jw9mr_openstack-operators(90bdb089-2ce0-4e37-bda6-5db68ebd89e8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.240438 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n524j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-44gbn_openstack-operators(94668c95-67c1-4f5e-9bde-e2d34d7ce631): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.240539 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" podUID="90bdb089-2ce0-4e37-bda6-5db68ebd89e8" Mar 11 08:58:53 crc 
kubenswrapper[4808]: E0311 08:58:53.240579 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" podUID="354b0411-5c50-48d0-9ed8-e4871a92dc0e" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.240604 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" podUID="9d6bd72a-6ed8-4558-950a-80c4aab533b0" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.241873 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" podUID="94668c95-67c1-4f5e-9bde-e2d34d7ce631" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.247755 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsxrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cd66dbd4b-clf8c_openstack-operators(1ea888eb-3b8a-4871-846d-6a3006087ffb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: W0311 08:58:53.248474 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd34e69_2d2e_43aa_9f2e_ee9a2a747a0a.slice/crio-c09cdf9738d1c661a05ea6f3d697912d4a0196e4a38c28fdc3e46f1d4fef9069 WatchSource:0}: Error finding container 
c09cdf9738d1c661a05ea6f3d697912d4a0196e4a38c28fdc3e46f1d4fef9069: Status 404 returned error can't find the container with id c09cdf9738d1c661a05ea6f3d697912d4a0196e4a38c28fdc3e46f1d4fef9069 Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.250607 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" podUID="1ea888eb-3b8a-4871-846d-6a3006087ffb" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.253026 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sdds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-b72f8_openstack-operators(7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.254586 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" podUID="7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.282024 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" event={"ID":"4b276a74-0d31-48b5-9556-0671578c4ab2","Type":"ContainerStarted","Data":"8621b93163cba48112926b5226094bdeac9b0c813980b4a2b09c6b4a3d86a0e7"} Mar 11 08:58:53 crc 
kubenswrapper[4808]: I0311 08:58:53.284935 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" event={"ID":"b1553dcf-b18c-45ef-a328-9eb5b86d5a02","Type":"ContainerStarted","Data":"bf28d87959d7cccc294f30e802ba3a8c1ad958b8f79669e093f63bf6af9f59d1"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.286450 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" event={"ID":"7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a","Type":"ContainerStarted","Data":"c09cdf9738d1c661a05ea6f3d697912d4a0196e4a38c28fdc3e46f1d4fef9069"} Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.288967 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" podUID="7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.290802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" event={"ID":"6863fca8-473b-4fc3-8e19-f717c2d164c9","Type":"ContainerStarted","Data":"d071c3ad5fd30e4a31f263247df91f0eb2000abf621cc868592be9c53f27da15"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.308614 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" event={"ID":"cd11b5ec-454a-4bbb-a4e4-5b4569c0e219","Type":"ContainerStarted","Data":"e04e0199d626240145fd134aa353e1aa9230d6aba6c18baa2f95cad7aea5b7a7"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.313175 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" event={"ID":"f90023c4-1729-4193-9853-4548be9c786c","Type":"ContainerStarted","Data":"eca2c8b35255ba778475652cb65641d7f8aa51afc272e796530106fa5c457083"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.317821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" event={"ID":"cf281551-b7f4-4d5c-823e-6e70132ae2d0","Type":"ContainerStarted","Data":"a67c65a72209cf914702628cd99936d4f886c3607d56b2c5a07034ed843bd2cb"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.323933 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" event={"ID":"383d2b79-82c7-4abd-bd0e-7cc157c35f28","Type":"ContainerStarted","Data":"e5f6c08483e717cff7581f1f0808e5581eefd572ed393353cde72d66e90873c0"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.327964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" event={"ID":"b82ff9e7-d047-4da9-8de7-177e1a3fbb7e","Type":"ContainerStarted","Data":"ac499a3ccd7ea9c93a9406ff84858ba5da502725a1938be8c84294a30b1bace3"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.333213 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" event={"ID":"cd5790a0-53c4-4ee8-95fa-f72ea9135488","Type":"ContainerStarted","Data":"6ff1cdfe3e682638e410601d8ca58ce94245f56f465d3d68fbf02442ec69b760"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.335146 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" event={"ID":"94668c95-67c1-4f5e-9bde-e2d34d7ce631","Type":"ContainerStarted","Data":"0b2dbf530a075f947c5e6c3b3fd0bc7b295cf485c870ddac297626997d48eb72"} Mar 11 08:58:53 crc 
kubenswrapper[4808]: E0311 08:58:53.336702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" podUID="94668c95-67c1-4f5e-9bde-e2d34d7ce631" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.338569 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" event={"ID":"9d6bd72a-6ed8-4558-950a-80c4aab533b0","Type":"ContainerStarted","Data":"2f6bdc87ebab07b37353e8d6574441e8d1964b224a9a818a643cfddb96826161"} Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.340191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" podUID="9d6bd72a-6ed8-4558-950a-80c4aab533b0" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.340623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" event={"ID":"4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0","Type":"ContainerStarted","Data":"b11030fd6cad4b1244c0fa971f0c6b9283a653cd5630279dbaf5e58e5ec418a4"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.346412 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" event={"ID":"14bd6d19-3c60-4752-bd29-09809df6f7b6","Type":"ContainerStarted","Data":"b039dfcf3618caf413ef775d6d8580a5208d0720b51d64da22a93c14f25924c0"} Mar 11 08:58:53 crc 
kubenswrapper[4808]: I0311 08:58:53.351026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" event={"ID":"b39efce9-91d6-43b0-b80c-30c223d26460","Type":"ContainerStarted","Data":"791dd2b1d14bf305ed34b7f850e3c5554a47a1068ecbeaacd60f7cac5aea0fa6"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.356418 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" event={"ID":"354b0411-5c50-48d0-9ed8-e4871a92dc0e","Type":"ContainerStarted","Data":"36f09d50a88d97d03fc60eb5616db5ca7a8d2913b313fb667b07b7504b0876e8"} Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.358060 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" podUID="354b0411-5c50-48d0-9ed8-e4871a92dc0e" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.358875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" event={"ID":"1ea888eb-3b8a-4871-846d-6a3006087ffb","Type":"ContainerStarted","Data":"3203f6eccf420a176d22a6d155244756cb1219068036e7db81914b09b6e29d1e"} Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.361940 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" podUID="1ea888eb-3b8a-4871-846d-6a3006087ffb" Mar 11 08:58:53 crc 
kubenswrapper[4808]: I0311 08:58:53.362189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" event={"ID":"46a1ed3a-101b-4412-80d3-b246794f4439","Type":"ContainerStarted","Data":"157185dcc1ca17e1b319661184949bb90c8037724df5517438783202803fa92a"} Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.365647 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" event={"ID":"90bdb089-2ce0-4e37-bda6-5db68ebd89e8","Type":"ContainerStarted","Data":"69b17a134fc7d200f0de4bcc54b8d1e1d115e4a5d8ea4e1325812274808169e1"} Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.366860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" podUID="90bdb089-2ce0-4e37-bda6-5db68ebd89e8" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.374838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57"] Mar 11 08:58:53 crc kubenswrapper[4808]: W0311 08:58:53.376626 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0711f78_69be_46d0_8857_e01d6927edfd.slice/crio-19e07727d6b1fbd4286dbb5607d7de64c4f8636275e9dcd1e88ccedd70f69c89 WatchSource:0}: Error finding container 19e07727d6b1fbd4286dbb5607d7de64c4f8636275e9dcd1e88ccedd70f69c89: Status 404 returned error can't find the container with id 19e07727d6b1fbd4286dbb5607d7de64c4f8636275e9dcd1e88ccedd70f69c89 Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.419517 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:53 crc kubenswrapper[4808]: I0311 08:58:53.419570 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.420611 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.420663 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:55.420649036 +0000 UTC m=+1186.373972356 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.420938 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:58:53 crc kubenswrapper[4808]: E0311 08:58:53.420966 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:55.420958715 +0000 UTC m=+1186.374282035 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:58:54 crc kubenswrapper[4808]: I0311 08:58:54.377490 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" event={"ID":"b0711f78-69be-46d0-8857-e01d6927edfd","Type":"ContainerStarted","Data":"19e07727d6b1fbd4286dbb5607d7de64c4f8636275e9dcd1e88ccedd70f69c89"} Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.382517 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" podUID="90bdb089-2ce0-4e37-bda6-5db68ebd89e8" Mar 11 08:58:54 crc 
kubenswrapper[4808]: E0311 08:58:54.382544 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" podUID="354b0411-5c50-48d0-9ed8-e4871a92dc0e" Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.383100 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" podUID="9d6bd72a-6ed8-4558-950a-80c4aab533b0" Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.383254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" podUID="1ea888eb-3b8a-4871-846d-6a3006087ffb" Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.383369 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" podUID="7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a" Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.384385 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" podUID="94668c95-67c1-4f5e-9bde-e2d34d7ce631" Mar 11 08:58:54 crc kubenswrapper[4808]: I0311 08:58:54.746820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.747029 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:54 crc kubenswrapper[4808]: E0311 08:58:54.747125 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert podName:557a289a-7329-40b7-a593-bfcfa58e679d nodeName:}" failed. No retries permitted until 2026-03-11 08:58:58.74710316 +0000 UTC m=+1189.700426550 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert") pod "infra-operator-controller-manager-5995f4446f-cb6bf" (UID: "557a289a-7329-40b7-a593-bfcfa58e679d") : secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: I0311 08:58:55.054333 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.054478 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.054551 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:59.054531445 +0000 UTC m=+1190.007854765 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: I0311 08:58:55.461463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:55 crc kubenswrapper[4808]: I0311 08:58:55.461554 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.461584 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.461667 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.461682 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:59.461659502 +0000 UTC m=+1190.414982882 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:58:55 crc kubenswrapper[4808]: E0311 08:58:55.461725 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:59.461710483 +0000 UTC m=+1190.415033803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:58:58 crc kubenswrapper[4808]: I0311 08:58:58.811482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:58:58 crc kubenswrapper[4808]: E0311 08:58:58.811629 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:58 crc kubenswrapper[4808]: E0311 08:58:58.812001 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert podName:557a289a-7329-40b7-a593-bfcfa58e679d nodeName:}" failed. No retries permitted until 2026-03-11 08:59:06.811983903 +0000 UTC m=+1197.765307213 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert") pod "infra-operator-controller-manager-5995f4446f-cb6bf" (UID: "557a289a-7329-40b7-a593-bfcfa58e679d") : secret "infra-operator-webhook-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: I0311 08:58:59.121107 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.121428 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.121485 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:07.121466306 +0000 UTC m=+1198.074789626 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: I0311 08:58:59.527169 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:59 crc kubenswrapper[4808]: I0311 08:58:59.527232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.527371 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.527433 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.527452 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:07.527430839 +0000 UTC m=+1198.480754239 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:58:59 crc kubenswrapper[4808]: E0311 08:58:59.527494 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:07.527475781 +0000 UTC m=+1198.480799171 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:59:04 crc kubenswrapper[4808]: I0311 08:59:04.794252 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 08:59:06 crc kubenswrapper[4808]: I0311 08:59:06.833795 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:59:06 crc kubenswrapper[4808]: I0311 08:59:06.851856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557a289a-7329-40b7-a593-bfcfa58e679d-cert\") pod \"infra-operator-controller-manager-5995f4446f-cb6bf\" (UID: \"557a289a-7329-40b7-a593-bfcfa58e679d\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:59:07 crc 
kubenswrapper[4808]: I0311 08:59:07.050238 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:59:07 crc kubenswrapper[4808]: I0311 08:59:07.137564 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.137776 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.137853 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert podName:aabfac3f-1196-4f9c-be5f-84bfbb833ae3 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:23.137833527 +0000 UTC m=+1214.091156837 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fct7h5" (UID: "aabfac3f-1196-4f9c-be5f-84bfbb833ae3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: I0311 08:59:07.542577 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.542794 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.543050 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:23.543029058 +0000 UTC m=+1214.496352378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "webhook-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: I0311 08:59:07.543312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.543480 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.543542 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs podName:37e5de88-802a-408e-9362-51166d0b7662 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:23.543523073 +0000 UTC m=+1214.496846423 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-c8cqh" (UID: "37e5de88-802a-408e-9362-51166d0b7662") : secret "metrics-server-cert" not found Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.763412 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6" Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.763606 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2l6z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6d9d6b584d-n49mc_openstack-operators(14bd6d19-3c60-4752-bd29-09809df6f7b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 08:59:07 crc kubenswrapper[4808]: E0311 08:59:07.764756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" podUID="14bd6d19-3c60-4752-bd29-09809df6f7b6" Mar 11 08:59:08 crc kubenswrapper[4808]: E0311 08:59:08.284931 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f" Mar 11 08:59:08 crc kubenswrapper[4808]: E0311 08:59:08.285075 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4wgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-pgtxk_openstack-operators(cf281551-b7f4-4d5c-823e-6e70132ae2d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 08:59:08 crc kubenswrapper[4808]: E0311 08:59:08.287884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" podUID="cf281551-b7f4-4d5c-823e-6e70132ae2d0" Mar 11 08:59:08 crc kubenswrapper[4808]: E0311 08:59:08.482280 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" podUID="cf281551-b7f4-4d5c-823e-6e70132ae2d0" Mar 11 08:59:08 crc kubenswrapper[4808]: E0311 08:59:08.482754 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" podUID="14bd6d19-3c60-4752-bd29-09809df6f7b6" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.439596 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.439740 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7lk57_openstack-operators(b0711f78-69be-46d0-8857-e01d6927edfd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.441583 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" podUID="b0711f78-69be-46d0-8857-e01d6927edfd" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.488101 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" podUID="b0711f78-69be-46d0-8857-e01d6927edfd" Mar 11 08:59:09 crc 
kubenswrapper[4808]: E0311 08:59:09.981155 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.981342 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nj7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-9k6vf_openstack-operators(f90023c4-1729-4193-9853-4548be9c786c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 08:59:09 crc kubenswrapper[4808]: E0311 08:59:09.983068 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" podUID="f90023c4-1729-4193-9853-4548be9c786c" Mar 11 08:59:10 crc kubenswrapper[4808]: E0311 08:59:10.493736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" podUID="f90023c4-1729-4193-9853-4548be9c786c" Mar 11 08:59:12 crc kubenswrapper[4808]: I0311 08:59:12.017375 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf"] Mar 11 08:59:12 crc kubenswrapper[4808]: W0311 08:59:12.460211 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557a289a_7329_40b7_a593_bfcfa58e679d.slice/crio-a377c87516246aeab9fb8c4b5851604b757b0e69c13e5356bed6e32ddf4b1a67 WatchSource:0}: Error finding container a377c87516246aeab9fb8c4b5851604b757b0e69c13e5356bed6e32ddf4b1a67: Status 404 returned error can't find the container with id a377c87516246aeab9fb8c4b5851604b757b0e69c13e5356bed6e32ddf4b1a67 Mar 11 08:59:12 crc kubenswrapper[4808]: I0311 08:59:12.510489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" event={"ID":"557a289a-7329-40b7-a593-bfcfa58e679d","Type":"ContainerStarted","Data":"a377c87516246aeab9fb8c4b5851604b757b0e69c13e5356bed6e32ddf4b1a67"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.535936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" event={"ID":"383d2b79-82c7-4abd-bd0e-7cc157c35f28","Type":"ContainerStarted","Data":"b8608b3932cfd3f929ed77894136a1a16cac0c85322d3618e6d69b142117737b"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.537271 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.540214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" event={"ID":"7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a","Type":"ContainerStarted","Data":"a180ce7432b58e49327a8c67020899372fa16b1ca3d54b131583b3fe8d7dd69e"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.540621 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.551175 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" event={"ID":"90bdb089-2ce0-4e37-bda6-5db68ebd89e8","Type":"ContainerStarted","Data":"0d5bcf32630bb5406d2178c4c9010470048799c31b0dff09a6812a83af00d9ff"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.551513 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.564220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" event={"ID":"46a1ed3a-101b-4412-80d3-b246794f4439","Type":"ContainerStarted","Data":"47dbb1be241329bde1231ba3ec7bb3f2dd809c37a9d2222e52fde9a5c61933c5"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.564260 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.568760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" event={"ID":"4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0","Type":"ContainerStarted","Data":"65bad9f30366ec51122bb0307c582a66b1067c3b918420b8954913288baf20b8"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.569401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.569604 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" podStartSLOduration=7.474354639 podStartE2EDuration="24.569589474s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:52.330312975 +0000 UTC m=+1183.283636295" lastFinishedPulling="2026-03-11 08:59:09.42554781 +0000 UTC m=+1200.378871130" observedRunningTime="2026-03-11 08:59:14.565772275 +0000 UTC m=+1205.519095595" watchObservedRunningTime="2026-03-11 08:59:14.569589474 +0000 UTC m=+1205.522912784" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.578524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" event={"ID":"b39efce9-91d6-43b0-b80c-30c223d26460","Type":"ContainerStarted","Data":"4124fdef7c2b2b26121b72797bec998dff0d915e062c68b1576dbf5a13e38b5a"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.579117 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.591561 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" podStartSLOduration=3.169678864 podStartE2EDuration="23.591540152s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.252923208 +0000 UTC m=+1184.206246518" lastFinishedPulling="2026-03-11 08:59:13.674784486 +0000 UTC m=+1204.628107806" observedRunningTime="2026-03-11 08:59:14.587461565 +0000 UTC m=+1205.540784885" watchObservedRunningTime="2026-03-11 08:59:14.591540152 +0000 UTC m=+1205.544863472" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.594736 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" 
event={"ID":"1ea888eb-3b8a-4871-846d-6a3006087ffb","Type":"ContainerStarted","Data":"9970555057dee0e3b1f5efcaf6e22473e190fc12d7dd5a0a1ce39c6d8d9171d6"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.595575 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.608151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" event={"ID":"94668c95-67c1-4f5e-9bde-e2d34d7ce631","Type":"ContainerStarted","Data":"2a8b2c35fcf6fefc52d883dcf1857748d2e427ca085cc393ad794bb57f923940"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.608379 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.612181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" event={"ID":"cd5790a0-53c4-4ee8-95fa-f72ea9135488","Type":"ContainerStarted","Data":"ef8d72826310c7428c26ed600321f067bfea70e4e0343c9b2853334c5ed6be21"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.613001 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.630898 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" event={"ID":"9d6bd72a-6ed8-4558-950a-80c4aab533b0","Type":"ContainerStarted","Data":"4b914804413e29f693aec9b3de3d472bccca840bf78fb70ff99c1846f0264407"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.631117 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.634428 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" event={"ID":"b82ff9e7-d047-4da9-8de7-177e1a3fbb7e","Type":"ContainerStarted","Data":"08398924c3c8b9f9ed32921ed3f26828e71444ae13ffdd794cd171ab311e7e69"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.635192 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.657307 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" podStartSLOduration=3.189968204 podStartE2EDuration="23.657289163s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.238638459 +0000 UTC m=+1184.191961779" lastFinishedPulling="2026-03-11 08:59:13.705959418 +0000 UTC m=+1204.659282738" observedRunningTime="2026-03-11 08:59:14.655850442 +0000 UTC m=+1205.609173762" watchObservedRunningTime="2026-03-11 08:59:14.657289163 +0000 UTC m=+1205.610612483" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.657912 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" podStartSLOduration=8.373118419 podStartE2EDuration="24.65790725s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.139805122 +0000 UTC m=+1184.093128442" lastFinishedPulling="2026-03-11 08:59:09.424593953 +0000 UTC m=+1200.377917273" observedRunningTime="2026-03-11 08:59:14.606021506 +0000 UTC m=+1205.559344826" watchObservedRunningTime="2026-03-11 08:59:14.65790725 +0000 UTC m=+1205.611230570" Mar 11 08:59:14 crc kubenswrapper[4808]: 
I0311 08:59:14.661285 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" event={"ID":"6863fca8-473b-4fc3-8e19-f717c2d164c9","Type":"ContainerStarted","Data":"e02144420c28eb7f950166bc00acc2723dbe0f128d839d4e1ae6aa8c5794129e"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.661517 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.687722 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" event={"ID":"cd11b5ec-454a-4bbb-a4e4-5b4569c0e219","Type":"ContainerStarted","Data":"d9c086f4369266a94e3ab3848b5054247e47280d80343bf54e188bdc3b86becd"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.688603 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.751113 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" podStartSLOduration=3.28492118 podStartE2EDuration="23.751086116s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.247558594 +0000 UTC m=+1184.200881914" lastFinishedPulling="2026-03-11 08:59:13.71372353 +0000 UTC m=+1204.667046850" observedRunningTime="2026-03-11 08:59:14.712268966 +0000 UTC m=+1205.665592306" watchObservedRunningTime="2026-03-11 08:59:14.751086116 +0000 UTC m=+1205.704409436" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.768313 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" 
event={"ID":"4b276a74-0d31-48b5-9556-0671578c4ab2","Type":"ContainerStarted","Data":"ffe3972995a1efd01e897e0c93802d7af4d0e204b1a12d7fd52951e8223583ca"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.768976 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.771096 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" podStartSLOduration=5.227780698 podStartE2EDuration="23.771072758s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.199627573 +0000 UTC m=+1184.152950893" lastFinishedPulling="2026-03-11 08:59:11.742919633 +0000 UTC m=+1202.696242953" observedRunningTime="2026-03-11 08:59:14.753162875 +0000 UTC m=+1205.706486205" watchObservedRunningTime="2026-03-11 08:59:14.771072758 +0000 UTC m=+1205.724396068" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.784644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" event={"ID":"b1553dcf-b18c-45ef-a328-9eb5b86d5a02","Type":"ContainerStarted","Data":"4eef6d22d2af941e6b170c6ab26b5b0d927fb14fa34bfbd17ab6eda5a19ff977"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.785317 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.798020 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" event={"ID":"354b0411-5c50-48d0-9ed8-e4871a92dc0e","Type":"ContainerStarted","Data":"630f6f74fd44ef4a48c5b7753830e42f80437a1a198a8e4d59de41184950ba4d"} Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.798870 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.848450 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" podStartSLOduration=7.572358403 podStartE2EDuration="24.848431781s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.238239288 +0000 UTC m=+1184.191562608" lastFinishedPulling="2026-03-11 08:59:10.514312666 +0000 UTC m=+1201.467635986" observedRunningTime="2026-03-11 08:59:14.841795711 +0000 UTC m=+1205.795119031" watchObservedRunningTime="2026-03-11 08:59:14.848431781 +0000 UTC m=+1205.801755101" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.850589 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" podStartSLOduration=7.006967098 podStartE2EDuration="24.850579142s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:52.670332912 +0000 UTC m=+1183.623656232" lastFinishedPulling="2026-03-11 08:59:10.513944956 +0000 UTC m=+1201.467268276" observedRunningTime="2026-03-11 08:59:14.799098319 +0000 UTC m=+1205.752421639" watchObservedRunningTime="2026-03-11 08:59:14.850579142 +0000 UTC m=+1205.803902462" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.931205 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" podStartSLOduration=3.595510045 podStartE2EDuration="23.931185928s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.240275836 +0000 UTC m=+1184.193599156" lastFinishedPulling="2026-03-11 08:59:13.575951729 +0000 UTC m=+1204.529275039" observedRunningTime="2026-03-11 
08:59:14.921393468 +0000 UTC m=+1205.874716788" watchObservedRunningTime="2026-03-11 08:59:14.931185928 +0000 UTC m=+1205.884509248" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.933491 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" podStartSLOduration=4.733355205 podStartE2EDuration="23.933484674s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.238406242 +0000 UTC m=+1184.191729562" lastFinishedPulling="2026-03-11 08:59:12.438535701 +0000 UTC m=+1203.391859031" observedRunningTime="2026-03-11 08:59:14.871434769 +0000 UTC m=+1205.824758089" watchObservedRunningTime="2026-03-11 08:59:14.933484674 +0000 UTC m=+1205.886807994" Mar 11 08:59:14 crc kubenswrapper[4808]: I0311 08:59:14.979069 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" podStartSLOduration=8.267998722 podStartE2EDuration="24.979052267s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:52.713647421 +0000 UTC m=+1183.666970741" lastFinishedPulling="2026-03-11 08:59:09.424700976 +0000 UTC m=+1200.378024286" observedRunningTime="2026-03-11 08:59:14.976609167 +0000 UTC m=+1205.929932477" watchObservedRunningTime="2026-03-11 08:59:14.979052267 +0000 UTC m=+1205.932375587" Mar 11 08:59:15 crc kubenswrapper[4808]: I0311 08:59:15.011912 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" podStartSLOduration=7.712158111 podStartE2EDuration="25.011894797s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.21421669 +0000 UTC m=+1184.167540010" lastFinishedPulling="2026-03-11 08:59:10.513953376 +0000 UTC m=+1201.467276696" observedRunningTime="2026-03-11 08:59:15.010666852 
+0000 UTC m=+1205.963990182" watchObservedRunningTime="2026-03-11 08:59:15.011894797 +0000 UTC m=+1205.965218117" Mar 11 08:59:15 crc kubenswrapper[4808]: I0311 08:59:15.094861 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" podStartSLOduration=7.757145659 podStartE2EDuration="25.09484182s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.17644179 +0000 UTC m=+1184.129765120" lastFinishedPulling="2026-03-11 08:59:10.514137961 +0000 UTC m=+1201.467461281" observedRunningTime="2026-03-11 08:59:15.055626788 +0000 UTC m=+1206.008950108" watchObservedRunningTime="2026-03-11 08:59:15.09484182 +0000 UTC m=+1206.048165140" Mar 11 08:59:15 crc kubenswrapper[4808]: I0311 08:59:15.131843 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" podStartSLOduration=6.83202642 podStartE2EDuration="24.131825298s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.214664203 +0000 UTC m=+1184.167987523" lastFinishedPulling="2026-03-11 08:59:10.514463081 +0000 UTC m=+1201.467786401" observedRunningTime="2026-03-11 08:59:15.100699907 +0000 UTC m=+1206.054023227" watchObservedRunningTime="2026-03-11 08:59:15.131825298 +0000 UTC m=+1206.085148618" Mar 11 08:59:15 crc kubenswrapper[4808]: I0311 08:59:15.176640 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" podStartSLOduration=8.429484862 podStartE2EDuration="25.176621249s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:52.678418013 +0000 UTC m=+1183.631741323" lastFinishedPulling="2026-03-11 08:59:09.42555439 +0000 UTC m=+1200.378877710" observedRunningTime="2026-03-11 08:59:15.174695034 +0000 UTC 
m=+1206.128018354" watchObservedRunningTime="2026-03-11 08:59:15.176621249 +0000 UTC m=+1206.129944569" Mar 11 08:59:15 crc kubenswrapper[4808]: I0311 08:59:15.176813 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" podStartSLOduration=3.740641666 podStartE2EDuration="24.176809175s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.238619858 +0000 UTC m=+1184.191943178" lastFinishedPulling="2026-03-11 08:59:13.674787357 +0000 UTC m=+1204.628110687" observedRunningTime="2026-03-11 08:59:15.130657814 +0000 UTC m=+1206.083981134" watchObservedRunningTime="2026-03-11 08:59:15.176809175 +0000 UTC m=+1206.130132495" Mar 11 08:59:17 crc kubenswrapper[4808]: I0311 08:59:17.833495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" event={"ID":"557a289a-7329-40b7-a593-bfcfa58e679d","Type":"ContainerStarted","Data":"3619c425d22a038a67182fad75d5d3d8d0c2427e9c082cd1f7f06a58fb76a02c"} Mar 11 08:59:17 crc kubenswrapper[4808]: I0311 08:59:17.834045 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:59:17 crc kubenswrapper[4808]: I0311 08:59:17.854982 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" podStartSLOduration=23.604750173 podStartE2EDuration="27.853361561s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:59:12.464669349 +0000 UTC m=+1203.417992669" lastFinishedPulling="2026-03-11 08:59:16.713280737 +0000 UTC m=+1207.666604057" observedRunningTime="2026-03-11 08:59:17.848959545 +0000 UTC m=+1208.802282865" watchObservedRunningTime="2026-03-11 08:59:17.853361561 +0000 UTC m=+1208.806684891" Mar 11 08:59:21 
crc kubenswrapper[4808]: I0311 08:59:21.243103 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-s5kbb" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.270230 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-nt6vp" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.293817 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7zq56" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.320544 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-lqxgc" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.342020 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6wz56" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.522495 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-n674k" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.564352 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-95v7v" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.588482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-lsmx2" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.603219 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-snl5f" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.640488 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-44gbn" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.650021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-8kpv9" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.750668 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-j9bkm" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.774048 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-b72f8" Mar 11 08:59:21 crc kubenswrapper[4808]: I0311 08:59:21.996040 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kns94" Mar 11 08:59:22 crc kubenswrapper[4808]: I0311 08:59:22.016783 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-jw9mr" Mar 11 08:59:22 crc kubenswrapper[4808]: I0311 08:59:22.022476 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-clf8c" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.182432 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.188052 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aabfac3f-1196-4f9c-be5f-84bfbb833ae3-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fct7h5\" (UID: \"aabfac3f-1196-4f9c-be5f-84bfbb833ae3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.205667 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gzq4m" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.214407 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.590233 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.590684 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.595841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: 
\"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.602009 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37e5de88-802a-408e-9362-51166d0b7662-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-c8cqh\" (UID: \"37e5de88-802a-408e-9362-51166d0b7662\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.716477 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5"] Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.836856 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bpnvg" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.839623 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:23 crc kubenswrapper[4808]: I0311 08:59:23.872536 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" event={"ID":"aabfac3f-1196-4f9c-be5f-84bfbb833ae3","Type":"ContainerStarted","Data":"32ddb04ab408cfa0ab976bbb091bcaef7ce14bb7403aa3c0cf044540bcf4c435"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.167045 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh"] Mar 11 08:59:24 crc kubenswrapper[4808]: W0311 08:59:24.171604 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37e5de88_802a_408e_9362_51166d0b7662.slice/crio-594f89daf31d9c8c58f635c31497b41a22e20de40b0f578ef87cccdf2210c091 WatchSource:0}: Error finding container 594f89daf31d9c8c58f635c31497b41a22e20de40b0f578ef87cccdf2210c091: Status 404 returned error can't find the container with id 594f89daf31d9c8c58f635c31497b41a22e20de40b0f578ef87cccdf2210c091 Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.883378 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" event={"ID":"37e5de88-802a-408e-9362-51166d0b7662","Type":"ContainerStarted","Data":"baf1e3a7ea5098951f1771813dc76ce3f71c7cf3360022aff1574ba07ca6e602"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.883821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" event={"ID":"37e5de88-802a-408e-9362-51166d0b7662","Type":"ContainerStarted","Data":"594f89daf31d9c8c58f635c31497b41a22e20de40b0f578ef87cccdf2210c091"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.884926 4808 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.886383 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" event={"ID":"cf281551-b7f4-4d5c-823e-6e70132ae2d0","Type":"ContainerStarted","Data":"5eb5e1cec1f32c531f4b955d7448f85b984b97f74cc742730fec1572a438c71e"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.886692 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.895350 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" event={"ID":"b0711f78-69be-46d0-8857-e01d6927edfd","Type":"ContainerStarted","Data":"af5495b7f6c6d20443e016832d776f770354ca11cd849ebfe6a95a7aa765df36"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.897920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" event={"ID":"14bd6d19-3c60-4752-bd29-09809df6f7b6","Type":"ContainerStarted","Data":"9007296f101441b24d7c195b2924d67111c155bc848eedf493eca47f159d2b8a"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.898153 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.900178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" event={"ID":"f90023c4-1729-4193-9853-4548be9c786c","Type":"ContainerStarted","Data":"8aa5e8e3e411cb6febdef5f034e6ceced1780e7a3cef7321f86de166e918a79d"} Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.900421 
4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.922412 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" podStartSLOduration=33.922390851 podStartE2EDuration="33.922390851s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:24.916418711 +0000 UTC m=+1215.869742041" watchObservedRunningTime="2026-03-11 08:59:24.922390851 +0000 UTC m=+1215.875714171" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.938290 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" podStartSLOduration=4.1700480540000004 podStartE2EDuration="34.938272946s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.172391814 +0000 UTC m=+1184.125715134" lastFinishedPulling="2026-03-11 08:59:23.940616706 +0000 UTC m=+1214.893940026" observedRunningTime="2026-03-11 08:59:24.93107139 +0000 UTC m=+1215.884394740" watchObservedRunningTime="2026-03-11 08:59:24.938272946 +0000 UTC m=+1215.891596286" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.946816 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" podStartSLOduration=4.204287963 podStartE2EDuration="34.9467987s" podCreationTimestamp="2026-03-11 08:58:50 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.176002287 +0000 UTC m=+1184.129325607" lastFinishedPulling="2026-03-11 08:59:23.918513024 +0000 UTC m=+1214.871836344" observedRunningTime="2026-03-11 08:59:24.945291487 +0000 UTC m=+1215.898614807" 
watchObservedRunningTime="2026-03-11 08:59:24.9467987 +0000 UTC m=+1215.900122020" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.973393 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lk57" podStartSLOduration=3.581347629 podStartE2EDuration="33.97337069s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.382254567 +0000 UTC m=+1184.335577887" lastFinishedPulling="2026-03-11 08:59:23.774277628 +0000 UTC m=+1214.727600948" observedRunningTime="2026-03-11 08:59:24.965843654 +0000 UTC m=+1215.919166974" watchObservedRunningTime="2026-03-11 08:59:24.97337069 +0000 UTC m=+1215.926694010" Mar 11 08:59:24 crc kubenswrapper[4808]: I0311 08:59:24.985780 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" podStartSLOduration=3.288957805 podStartE2EDuration="33.985763514s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:58:53.222711243 +0000 UTC m=+1184.176034563" lastFinishedPulling="2026-03-11 08:59:23.919516952 +0000 UTC m=+1214.872840272" observedRunningTime="2026-03-11 08:59:24.98000136 +0000 UTC m=+1215.933324680" watchObservedRunningTime="2026-03-11 08:59:24.985763514 +0000 UTC m=+1215.939086844" Mar 11 08:59:25 crc kubenswrapper[4808]: I0311 08:59:25.908889 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" event={"ID":"aabfac3f-1196-4f9c-be5f-84bfbb833ae3","Type":"ContainerStarted","Data":"8523b8058fe20bc7cb135e3d27994c7049dde3a30cb1d21a31e1c7fca67f72cf"} Mar 11 08:59:25 crc kubenswrapper[4808]: I0311 08:59:25.930270 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" 
podStartSLOduration=32.957400796 podStartE2EDuration="34.930251143s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="2026-03-11 08:59:23.724830243 +0000 UTC m=+1214.678153553" lastFinishedPulling="2026-03-11 08:59:25.69768058 +0000 UTC m=+1216.651003900" observedRunningTime="2026-03-11 08:59:25.927771942 +0000 UTC m=+1216.881095262" watchObservedRunningTime="2026-03-11 08:59:25.930251143 +0000 UTC m=+1216.883574463" Mar 11 08:59:26 crc kubenswrapper[4808]: I0311 08:59:26.914404 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:27 crc kubenswrapper[4808]: I0311 08:59:27.057060 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-cb6bf" Mar 11 08:59:31 crc kubenswrapper[4808]: I0311 08:59:31.357246 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-n49mc" Mar 11 08:59:31 crc kubenswrapper[4808]: I0311 08:59:31.545197 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-9k6vf" Mar 11 08:59:31 crc kubenswrapper[4808]: I0311 08:59:31.679296 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-pgtxk" Mar 11 08:59:33 crc kubenswrapper[4808]: I0311 08:59:33.223714 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fct7h5" Mar 11 08:59:33 crc kubenswrapper[4808]: I0311 08:59:33.846934 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-c8cqh" Mar 11 08:59:49 crc 
kubenswrapper[4808]: I0311 08:59:49.820899 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.823832 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.827170 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.827189 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-q4zw4" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.828396 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.828505 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.847962 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.869952 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.870568 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748cp\" (UniqueName: \"kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc 
kubenswrapper[4808]: I0311 08:59:49.887377 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.894771 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.898572 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.907631 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.971791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.971849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.971917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748cp\" (UniqueName: \"kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.971956 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f9hrt\" (UniqueName: \"kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.972028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.972712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:49 crc kubenswrapper[4808]: I0311 08:59:49.991460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748cp\" (UniqueName: \"kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp\") pod \"dnsmasq-dns-589db6c89c-s9w9c\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.073274 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hrt\" (UniqueName: \"kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.073332 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.073415 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.074384 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.074399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.089301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hrt\" (UniqueName: \"kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt\") pod \"dnsmasq-dns-86bbd886cf-87zbx\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.145405 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.210693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.444896 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 08:59:50 crc kubenswrapper[4808]: I0311 08:59:50.552562 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 08:59:50 crc kubenswrapper[4808]: W0311 08:59:50.556972 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b680370_1655_424d_add1_0a15b5137299.slice/crio-fbe4913e0561f49134c373807c24aa0b1b9814278452438e93338bd751b87452 WatchSource:0}: Error finding container fbe4913e0561f49134c373807c24aa0b1b9814278452438e93338bd751b87452: Status 404 returned error can't find the container with id fbe4913e0561f49134c373807c24aa0b1b9814278452438e93338bd751b87452 Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.047790 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.097597 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.098722 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.112013 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.114306 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" event={"ID":"8b680370-1655-424d-add1-0a15b5137299","Type":"ContainerStarted","Data":"fbe4913e0561f49134c373807c24aa0b1b9814278452438e93338bd751b87452"} Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.115741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" event={"ID":"dc1ab132-bd3f-4240-9682-135407f6b4fd","Type":"ContainerStarted","Data":"d3ab0ee3d77d15d6fffc71d2b032dcb23a34b4f2fe46506f401010c973e8d411"} Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.195684 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrjg\" (UniqueName: \"kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.195754 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.195811 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: 
\"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.298017 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.298815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.299061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrjg\" (UniqueName: \"kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.299303 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.299990 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc 
kubenswrapper[4808]: I0311 08:59:51.320550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrjg\" (UniqueName: \"kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg\") pod \"dnsmasq-dns-78cb4465c9-5pxmr\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.426083 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 08:59:51 crc kubenswrapper[4808]: I0311 08:59:51.881419 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 08:59:51 crc kubenswrapper[4808]: W0311 08:59:51.910518 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0d4d29_dc82_4e09_bf90_874143df5d59.slice/crio-74fcf50ab2a02ba0a530c763f92e3e8565239181759b04f93ddac50d33301e92 WatchSource:0}: Error finding container 74fcf50ab2a02ba0a530c763f92e3e8565239181759b04f93ddac50d33301e92: Status 404 returned error can't find the container with id 74fcf50ab2a02ba0a530c763f92e3e8565239181759b04f93ddac50d33301e92 Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.066915 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.092332 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.093448 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.122756 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.159130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" event={"ID":"7e0d4d29-dc82-4e09-bf90-874143df5d59","Type":"ContainerStarted","Data":"74fcf50ab2a02ba0a530c763f92e3e8565239181759b04f93ddac50d33301e92"} Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.216493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfh2c\" (UniqueName: \"kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.216591 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.216613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.278967 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.280733 4808 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.284691 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5nsvh" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.284873 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.284978 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.285071 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.285250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.285352 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.285467 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.313495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317768 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317820 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317847 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317880 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317938 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mfh2c\" (UniqueName: \"kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.317977 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5444\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318031 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318069 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318089 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.318111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.319132 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.319137 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.335522 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfh2c\" (UniqueName: 
\"kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c\") pod \"dnsmasq-dns-7c47bcb9f9-s9h2q\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.420485 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421170 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421219 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421258 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421382 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421433 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421467 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421507 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421556 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5444\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.421697 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.422406 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.422948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.423500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.424107 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.424558 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.426123 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.428138 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.444043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5444\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 
08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.450270 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.451824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.465526 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.619841 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.944024 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 08:59:52 crc kubenswrapper[4808]: I0311 08:59:52.986162 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"] Mar 11 08:59:53 crc kubenswrapper[4808]: W0311 08:59:53.003854 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eae4bd_3de8_43d4_95d6_2f7ff2556bb1.slice/crio-3f398b2a084a66daa16864bfafd6679c8b79fd196a3415ee3e0c2d16d4029926 WatchSource:0}: Error finding container 3f398b2a084a66daa16864bfafd6679c8b79fd196a3415ee3e0c2d16d4029926: Status 404 returned error can't find the container with id 3f398b2a084a66daa16864bfafd6679c8b79fd196a3415ee3e0c2d16d4029926 Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.170042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerStarted","Data":"634c1d9c33bd62beaea1da188b124ac41111282a8b7359d08dd847e583ed5b35"} Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.171544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" event={"ID":"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1","Type":"ContainerStarted","Data":"3f398b2a084a66daa16864bfafd6679c8b79fd196a3415ee3e0c2d16d4029926"} Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.221743 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.222898 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.226969 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.229919 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bs2qz" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.230084 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.230178 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.230374 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.230461 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.232937 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.244204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340223 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340255 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjfk\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.340564 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.451538 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " 
pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.451615 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjfk\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.451937 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.451972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452127 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452190 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452240 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452327 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452388 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.452430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.454120 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.454376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.454458 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.454744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.455043 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.459210 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.460892 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.461069 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.466632 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.471587 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjfk\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.479837 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.527988 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " pod="openstack/rabbitmq-server-0" Mar 11 08:59:53 crc kubenswrapper[4808]: I0311 08:59:53.557289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.175926 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 08:59:54 crc kubenswrapper[4808]: W0311 08:59:54.187157 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e42e33_7453_4b97_abca_0c45cc27faa2.slice/crio-9ed8629a77a7b2a4196e9177e08748adb9c8b72b95a4adcf70710437ccb8d0db WatchSource:0}: Error finding container 9ed8629a77a7b2a4196e9177e08748adb9c8b72b95a4adcf70710437ccb8d0db: Status 404 returned error can't find the container with id 9ed8629a77a7b2a4196e9177e08748adb9c8b72b95a4adcf70710437ccb8d0db Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.626764 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.628724 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.636105 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qkpdk" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.636247 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.636417 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.636726 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.639968 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.643032 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.771737 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.771800 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.771901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.771990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.772063 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.772107 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.772137 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2562v\" (UniqueName: \"kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.772174 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.873846 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.873906 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.873935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2562v\" (UniqueName: \"kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.873958 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.873980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.875139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.875201 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.875255 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.875842 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.876942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.877273 4808 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.877499 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.879980 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.886144 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.905745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2562v\" (UniqueName: \"kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.917926 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.918948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " pod="openstack/openstack-galera-0" Mar 11 08:59:54 crc kubenswrapper[4808]: I0311 08:59:54.951323 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 08:59:55 crc kubenswrapper[4808]: I0311 08:59:55.195135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerStarted","Data":"9ed8629a77a7b2a4196e9177e08748adb9c8b72b95a4adcf70710437ccb8d0db"} Mar 11 08:59:55 crc kubenswrapper[4808]: I0311 08:59:55.529164 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 08:59:55 crc kubenswrapper[4808]: I0311 08:59:55.987115 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 08:59:55 crc kubenswrapper[4808]: I0311 08:59:55.998510 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 08:59:55 crc kubenswrapper[4808]: I0311 08:59:55.998699 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.011677 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kg7gr" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.012471 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.013430 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.014493 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.095761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096088 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4xg\" (UniqueName: \"kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" 
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096201 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096219 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096247 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.096266 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 
08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197519 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197582 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197603 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4xg\" (UniqueName: \"kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197675 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197722 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.197740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.198204 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.199938 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.200553 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.201912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.212959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.219386 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerStarted","Data":"4dc13a953beb5f28c0a418de47c1cbf82f0c69dafcb7418cc269198faae562c5"} Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.219767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.223542 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.226249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4xg\" (UniqueName: \"kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.244693 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.335566 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.377469 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.378384 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.382838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xs9ll"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.383315 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.384070 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.397711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.507202 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.507243 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.507312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.507333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.507414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.608728 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.608779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.608851 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.608874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.608905 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.610316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.615079 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.617003 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.619279 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.625942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs\") pod \"memcached-0\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " pod="openstack/memcached-0"
Mar 11 08:59:56 crc kubenswrapper[4808]: I0311 08:59:56.711083 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.484973 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.486446 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.488866 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-b7w46"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.520113 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.640053 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvv7t\" (UniqueName: \"kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t\") pod \"kube-state-metrics-0\" (UID: \"fadbd323-a0cc-4d54-b71f-bebe9c716af4\") " pod="openstack/kube-state-metrics-0"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.741895 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvv7t\" (UniqueName: \"kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t\") pod \"kube-state-metrics-0\" (UID: \"fadbd323-a0cc-4d54-b71f-bebe9c716af4\") " pod="openstack/kube-state-metrics-0"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.783242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvv7t\" (UniqueName: \"kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t\") pod \"kube-state-metrics-0\" (UID: \"fadbd323-a0cc-4d54-b71f-bebe9c716af4\") " pod="openstack/kube-state-metrics-0"
Mar 11 08:59:58 crc kubenswrapper[4808]: I0311 08:59:58.824324 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.130804 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553660-56kcb"]
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.132332 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-56kcb"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.135194 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.135516 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.135777 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.148051 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-56kcb"]
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.231290 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"]
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.232313 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.234925 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.235172 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.244166 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"]
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.269754 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7p7j\" (UniqueName: \"kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j\") pod \"auto-csr-approver-29553660-56kcb\" (UID: \"cdfb1b93-4963-4e67-a64b-6489306f8fc3\") " pod="openshift-infra/auto-csr-approver-29553660-56kcb"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.371544 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.371688 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7p7j\" (UniqueName: \"kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j\") pod \"auto-csr-approver-29553660-56kcb\" (UID: \"cdfb1b93-4963-4e67-a64b-6489306f8fc3\") " pod="openshift-infra/auto-csr-approver-29553660-56kcb"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.371739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.371783 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9q9\" (UniqueName: \"kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.404878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7p7j\" (UniqueName: \"kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j\") pod \"auto-csr-approver-29553660-56kcb\" (UID: \"cdfb1b93-4963-4e67-a64b-6489306f8fc3\") " pod="openshift-infra/auto-csr-approver-29553660-56kcb"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.453504 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-56kcb"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.473689 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.474099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.474306 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z9q9\" (UniqueName: \"kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.474743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.478868 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.494439 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z9q9\" (UniqueName: \"kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9\") pod \"collect-profiles-29553660-2qbqt\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:00 crc kubenswrapper[4808]: I0311 09:00:00.550222 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.772764 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-spf22"]
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.774248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.776887 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x7sd9"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.777217 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.777510 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.782071 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spf22"]
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.817482 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"]
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.818953 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.845180 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"]
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898123 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898261 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898427 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898479 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9dh\" (UniqueName: \"kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:01 crc kubenswrapper[4808]: I0311 09:00:01.898542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:01.999954 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000150 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000184 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000224 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000274 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000293 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9dh\" (UniqueName: \"kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsxq6\" (UniqueName: \"kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.000974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.001031 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.002247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.005142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.013006 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.018866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9dh\" (UniqueName: \"kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh\") pod \"ovn-controller-spf22\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.097387 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spf22"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.101745 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.101865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.101900 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.101922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsxq6\" (UniqueName: \"kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102027 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102076 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102166 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.102373 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.103888 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.119955 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsxq6\" (UniqueName: \"kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6\") pod \"ovn-controller-ovs-mbbhf\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.141923 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbbhf"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.318847 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.320215 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.324404 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hvh87"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.324792 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.324977 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.327683 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.327924 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.328681 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.508559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.508664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.508755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.508968 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.509012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.509144 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.509271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lhl\" (UniqueName: \"kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.509375 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611544 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611660 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611709 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611743 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611794 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lhl\" (UniqueName: \"kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0"
Mar 11
09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611880 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.611918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.612409 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.612528 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.613068 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.613442 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.615374 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.622139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.622146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.632909 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lhl\" (UniqueName: \"kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " 
pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.643397 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:02 crc kubenswrapper[4808]: I0311 09:00:02.935852 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.561240 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.565327 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.567770 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.567770 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.567950 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jbzn5" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.568285 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.575264 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.679794 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.679864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.679944 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.679981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltrn\" (UniqueName: \"kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.680012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.680163 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.680216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.680241 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782134 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782227 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782254 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782276 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltrn\" (UniqueName: \"kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782370 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.782955 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.783044 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.784017 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.784159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.798786 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.799067 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.799391 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.803173 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltrn\" (UniqueName: \"kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.821220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:05 crc kubenswrapper[4808]: I0311 09:00:05.905606 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:10 crc kubenswrapper[4808]: E0311 09:00:10.014573 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" Mar 11 09:00:10 crc kubenswrapper[4808]: E0311 09:00:10.015047 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5444,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(549d4ad5-b5b0-45bd-87b0-b9a6ee77866e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:10 crc 
kubenswrapper[4808]: E0311 09:00:10.016239 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" Mar 11 09:00:10 crc kubenswrapper[4808]: E0311 09:00:10.331697 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" Mar 11 09:00:13 crc kubenswrapper[4808]: E0311 09:00:13.783335 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" Mar 11 09:00:13 crc kubenswrapper[4808]: E0311 09:00:13.784047 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rjfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a1e42e33-7453-4b97-abca-0c45cc27faa2): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:13 crc kubenswrapper[4808]: E0311 09:00:13.785606 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" Mar 11 09:00:14 crc kubenswrapper[4808]: E0311 09:00:14.358534 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.276798 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.277208 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfh2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c47bcb9f9-s9h2q_openstack(c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.278551 4808 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.278695 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9hrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{A
dd:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86bbd886cf-87zbx_openstack(dc1ab132-bd3f-4240-9682-135407f6b4fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.279398 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.279559 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-748cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-589db6c89c-s9w9c_openstack(8b680370-1655-424d-add1-0a15b5137299): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.280413 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" podUID="dc1ab132-bd3f-4240-9682-135407f6b4fd" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.280485 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.280670 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" podUID="8b680370-1655-424d-add1-0a15b5137299" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.339555 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.339798 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wrjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78cb4465c9-5pxmr_openstack(7e0d4d29-dc82-4e09-bf90-874143df5d59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.341420 4808 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" podUID="7e0d4d29-dc82-4e09-bf90-874143df5d59" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.379808 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" Mar 11 09:00:16 crc kubenswrapper[4808]: E0311 09:00:16.380144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" podUID="7e0d4d29-dc82-4e09-bf90-874143df5d59" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.046847 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.057550 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.138801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 09:00:17 crc kubenswrapper[4808]: W0311 09:00:17.139462 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-ebdadbe8efd61c0f588c8c23d6c9c374038818d6f8894768278498a3945a465e WatchSource:0}: Error finding container ebdadbe8efd61c0f588c8c23d6c9c374038818d6f8894768278498a3945a465e: Status 404 returned error can't find the container with id ebdadbe8efd61c0f588c8c23d6c9c374038818d6f8894768278498a3945a465e Mar 11 09:00:17 crc kubenswrapper[4808]: W0311 09:00:17.143215 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6baf079_20ab_45df_8e2d_2459a4286c9a.slice/crio-11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39 WatchSource:0}: Error finding container 11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39: Status 404 returned error can't find the container with id 11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39 Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.175262 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spf22"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.186449 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.195014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.201709 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config\") pod \"8b680370-1655-424d-add1-0a15b5137299\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.201782 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9hrt\" (UniqueName: \"kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt\") pod \"dc1ab132-bd3f-4240-9682-135407f6b4fd\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.201827 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc\") pod \"dc1ab132-bd3f-4240-9682-135407f6b4fd\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.201958 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config\") pod \"dc1ab132-bd3f-4240-9682-135407f6b4fd\" (UID: \"dc1ab132-bd3f-4240-9682-135407f6b4fd\") " Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.201991 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748cp\" (UniqueName: \"kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp\") pod \"8b680370-1655-424d-add1-0a15b5137299\" (UID: \"8b680370-1655-424d-add1-0a15b5137299\") " Mar 11 09:00:17 crc kubenswrapper[4808]: W0311 09:00:17.202529 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfadbd323_a0cc_4d54_b71f_bebe9c716af4.slice/crio-62f67b3796de52818a1d42cf4f841b061087e5b5db2e871df2e32afdf2c23447 WatchSource:0}: Error finding container 62f67b3796de52818a1d42cf4f841b061087e5b5db2e871df2e32afdf2c23447: 
Status 404 returned error can't find the container with id 62f67b3796de52818a1d42cf4f841b061087e5b5db2e871df2e32afdf2c23447 Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.203116 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc1ab132-bd3f-4240-9682-135407f6b4fd" (UID: "dc1ab132-bd3f-4240-9682-135407f6b4fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.203490 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config" (OuterVolumeSpecName: "config") pod "8b680370-1655-424d-add1-0a15b5137299" (UID: "8b680370-1655-424d-add1-0a15b5137299"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.203634 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config" (OuterVolumeSpecName: "config") pod "dc1ab132-bd3f-4240-9682-135407f6b4fd" (UID: "dc1ab132-bd3f-4240-9682-135407f6b4fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.208447 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp" (OuterVolumeSpecName: "kube-api-access-748cp") pod "8b680370-1655-424d-add1-0a15b5137299" (UID: "8b680370-1655-424d-add1-0a15b5137299"). InnerVolumeSpecName "kube-api-access-748cp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.209495 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt" (OuterVolumeSpecName: "kube-api-access-f9hrt") pod "dc1ab132-bd3f-4240-9682-135407f6b4fd" (UID: "dc1ab132-bd3f-4240-9682-135407f6b4fd"). InnerVolumeSpecName "kube-api-access-f9hrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.209562 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.251143 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-56kcb"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.255694 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:00:17 crc kubenswrapper[4808]: W0311 09:00:17.265845 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03f33ae9_1e48_4adf_94bc_69ede69802d0.slice/crio-a97b1f40d1f101ec8e766e45d5f1b3badd0cb9165ef6fd2c5df7c8bf2b059da8 WatchSource:0}: Error finding container a97b1f40d1f101ec8e766e45d5f1b3badd0cb9165ef6fd2c5df7c8bf2b059da8: Status 404 returned error can't find the container with id a97b1f40d1f101ec8e766e45d5f1b3badd0cb9165ef6fd2c5df7c8bf2b059da8 Mar 11 09:00:17 crc kubenswrapper[4808]: W0311 09:00:17.287742 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdfb1b93_4963_4e67_a64b_6489306f8fc3.slice/crio-ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05 WatchSource:0}: Error finding container ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05: Status 404 returned error can't find 
the container with id ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05 Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.306962 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.306991 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748cp\" (UniqueName: \"kubernetes.io/projected/8b680370-1655-424d-add1-0a15b5137299-kube-api-access-748cp\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.307002 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b680370-1655-424d-add1-0a15b5137299-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.307013 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9hrt\" (UniqueName: \"kubernetes.io/projected/dc1ab132-bd3f-4240-9682-135407f6b4fd-kube-api-access-f9hrt\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.307026 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc1ab132-bd3f-4240-9682-135407f6b4fd-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.345375 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.387563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22" event={"ID":"3fd1979f-d1de-42a8-be8e-b61087f737bc","Type":"ContainerStarted","Data":"ebdadbe8efd61c0f588c8c23d6c9c374038818d6f8894768278498a3945a465e"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.388845 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" event={"ID":"dc1ab132-bd3f-4240-9682-135407f6b4fd","Type":"ContainerDied","Data":"d3ab0ee3d77d15d6fffc71d2b032dcb23a34b4f2fe46506f401010c973e8d411"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.388993 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-87zbx" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.395660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" event={"ID":"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9","Type":"ContainerStarted","Data":"b83ba2ddcca5dbf52b6c44e7be61b1acd93f0e028dff7f413c33253fd21fa247"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.395734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" event={"ID":"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9","Type":"ContainerStarted","Data":"8c3b6a55a8fb50451fb95850f85af8e7145e66da36d1bebad8ee3ade729aea32"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.397668 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerStarted","Data":"d446736c3f1848ec2ed002693bffc75f040a0f6d96d48605bb0b1e96e588b24d"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.400652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" event={"ID":"8b680370-1655-424d-add1-0a15b5137299","Type":"ContainerDied","Data":"fbe4913e0561f49134c373807c24aa0b1b9814278452438e93338bd751b87452"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.400668 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-s9w9c" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.402400 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerStarted","Data":"a65df125d18e1f77fc1240d2aa4a854121a8ae7dbd36e648b9ddc3c72315bc77"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.403418 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerStarted","Data":"6cb2cdfcadb65284c6e41825903608f534c1ecc2702f14259dc334c53b9795be"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.404292 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c6baf079-20ab-45df-8e2d-2459a4286c9a","Type":"ContainerStarted","Data":"11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.405156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-56kcb" event={"ID":"cdfb1b93-4963-4e67-a64b-6489306f8fc3","Type":"ContainerStarted","Data":"ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.406013 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03f33ae9-1e48-4adf-94bc-69ede69802d0","Type":"ContainerStarted","Data":"a97b1f40d1f101ec8e766e45d5f1b3badd0cb9165ef6fd2c5df7c8bf2b059da8"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.407023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fadbd323-a0cc-4d54-b71f-bebe9c716af4","Type":"ContainerStarted","Data":"62f67b3796de52818a1d42cf4f841b061087e5b5db2e871df2e32afdf2c23447"} Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.417987 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" podStartSLOduration=17.417972772 podStartE2EDuration="17.417972772s" podCreationTimestamp="2026-03-11 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:17.41274591 +0000 UTC m=+1268.366069230" watchObservedRunningTime="2026-03-11 09:00:17.417972772 +0000 UTC m=+1268.371296092" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.507675 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.519644 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-87zbx"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.545414 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.551538 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-s9w9c"] Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.801033 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b680370-1655-424d-add1-0a15b5137299" path="/var/lib/kubelet/pods/8b680370-1655-424d-add1-0a15b5137299/volumes" Mar 11 09:00:17 crc kubenswrapper[4808]: I0311 09:00:17.801475 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1ab132-bd3f-4240-9682-135407f6b4fd" path="/var/lib/kubelet/pods/dc1ab132-bd3f-4240-9682-135407f6b4fd/volumes" Mar 11 09:00:18 crc kubenswrapper[4808]: I0311 09:00:18.134249 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:00:18 crc kubenswrapper[4808]: I0311 09:00:18.425567 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" containerID="b83ba2ddcca5dbf52b6c44e7be61b1acd93f0e028dff7f413c33253fd21fa247" exitCode=0 Mar 11 09:00:18 crc kubenswrapper[4808]: I0311 09:00:18.425689 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" event={"ID":"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9","Type":"ContainerDied","Data":"b83ba2ddcca5dbf52b6c44e7be61b1acd93f0e028dff7f413c33253fd21fa247"} Mar 11 09:00:18 crc kubenswrapper[4808]: I0311 09:00:18.427336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerStarted","Data":"e3e5d3531e06a473345f5a4c4e07e59335e271eccb4b7baaa2c90cf709058c91"} Mar 11 09:00:18 crc kubenswrapper[4808]: W0311 09:00:18.494371 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073052f7_094c_467a_8910_b2ce25e5b981.slice/crio-fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03 WatchSource:0}: Error finding container fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03: Status 404 returned error can't find the container with id fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03 Mar 11 09:00:19 crc kubenswrapper[4808]: I0311 09:00:19.446619 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerStarted","Data":"fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03"} Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.456726 4808 generic.go:334] "Generic (PLEG): container finished" podID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerID="a65df125d18e1f77fc1240d2aa4a854121a8ae7dbd36e648b9ddc3c72315bc77" exitCode=0 Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.456778 4808 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerDied","Data":"a65df125d18e1f77fc1240d2aa4a854121a8ae7dbd36e648b9ddc3c72315bc77"} Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.540763 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.663641 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume\") pod \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.663768 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z9q9\" (UniqueName: \"kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9\") pod \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.663843 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume\") pod \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\" (UID: \"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9\") " Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.664482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" (UID: "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.669003 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9" (OuterVolumeSpecName: "kube-api-access-7z9q9") pod "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" (UID: "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9"). InnerVolumeSpecName "kube-api-access-7z9q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.669600 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" (UID: "6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.765970 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z9q9\" (UniqueName: \"kubernetes.io/projected/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-kube-api-access-7z9q9\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.766017 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:20 crc kubenswrapper[4808]: I0311 09:00:20.766034 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:21 crc kubenswrapper[4808]: I0311 09:00:21.468881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" 
event={"ID":"6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9","Type":"ContainerDied","Data":"8c3b6a55a8fb50451fb95850f85af8e7145e66da36d1bebad8ee3ade729aea32"} Mar 11 09:00:21 crc kubenswrapper[4808]: I0311 09:00:21.468943 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c3b6a55a8fb50451fb95850f85af8e7145e66da36d1bebad8ee3ade729aea32" Mar 11 09:00:21 crc kubenswrapper[4808]: I0311 09:00:21.468908 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt" Mar 11 09:00:21 crc kubenswrapper[4808]: I0311 09:00:21.470763 4808 generic.go:334] "Generic (PLEG): container finished" podID="5c805958-e789-4689-bbd0-dc1a1a116486" containerID="e3e5d3531e06a473345f5a4c4e07e59335e271eccb4b7baaa2c90cf709058c91" exitCode=0 Mar 11 09:00:21 crc kubenswrapper[4808]: I0311 09:00:21.470826 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerDied","Data":"e3e5d3531e06a473345f5a4c4e07e59335e271eccb4b7baaa2c90cf709058c91"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.488492 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerStarted","Data":"eb0fb77e5ed841a8b6370ae95eb99a470aa96b2d34bd6fc4310915ae83795884"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.490169 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerStarted","Data":"464b1dff84d68cb54e2785c4998d80bcd9d2d9c96576bd186fecafee0ad6ee92"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.492553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"03f33ae9-1e48-4adf-94bc-69ede69802d0","Type":"ContainerStarted","Data":"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.493857 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22" event={"ID":"3fd1979f-d1de-42a8-be8e-b61087f737bc","Type":"ContainerStarted","Data":"36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.494173 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-spf22" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.495676 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fadbd323-a0cc-4d54-b71f-bebe9c716af4","Type":"ContainerStarted","Data":"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.495736 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.497459 4808 generic.go:334] "Generic (PLEG): container finished" podID="cdfb1b93-4963-4e67-a64b-6489306f8fc3" containerID="86d8334eb64365eca4fd6bcd5fcc557f4b30602304be2ee5bbea4165f15dd9cd" exitCode=0 Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.497519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-56kcb" event={"ID":"cdfb1b93-4963-4e67-a64b-6489306f8fc3","Type":"ContainerDied","Data":"86d8334eb64365eca4fd6bcd5fcc557f4b30602304be2ee5bbea4165f15dd9cd"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.499114 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerStarted","Data":"9dc74041b5de0f337b97cef5d7a082b76d5ccc5083fc2046cdd3bdf353512e2e"} Mar 11 
09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.500805 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac93b356-9c32-4094-9de5-8fd25c677810" containerID="4274d9b10427464d0ed03dea602f29a75967086af3a1931da7eaba3977be6f52" exitCode=0 Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.500854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerDied","Data":"4274d9b10427464d0ed03dea602f29a75967086af3a1931da7eaba3977be6f52"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.508873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c6baf079-20ab-45df-8e2d-2459a4286c9a","Type":"ContainerStarted","Data":"82f8eb58e9f3ad3c8cc07e575893f64a2f9e5741e48c57fc799afc4a33b2dc6a"} Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.508983 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.513529 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.870944875 podStartE2EDuration="30.513509325s" podCreationTimestamp="2026-03-11 08:59:53 +0000 UTC" firstStartedPulling="2026-03-11 08:59:55.614260647 +0000 UTC m=+1246.567583967" lastFinishedPulling="2026-03-11 09:00:16.256825087 +0000 UTC m=+1267.210148417" observedRunningTime="2026-03-11 09:00:23.509302924 +0000 UTC m=+1274.462626264" watchObservedRunningTime="2026-03-11 09:00:23.513509325 +0000 UTC m=+1274.466832645" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.531998 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-spf22" podStartSLOduration=18.402861202 podStartE2EDuration="22.53197747s" podCreationTimestamp="2026-03-11 09:00:01 +0000 UTC" firstStartedPulling="2026-03-11 09:00:17.141260206 +0000 UTC 
m=+1268.094588936" lastFinishedPulling="2026-03-11 09:00:21.270381884 +0000 UTC m=+1272.223705204" observedRunningTime="2026-03-11 09:00:23.526003967 +0000 UTC m=+1274.479327287" watchObservedRunningTime="2026-03-11 09:00:23.53197747 +0000 UTC m=+1274.485300790" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.545952 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.402262371 podStartE2EDuration="25.545937044s" podCreationTimestamp="2026-03-11 08:59:58 +0000 UTC" firstStartedPulling="2026-03-11 09:00:17.217204253 +0000 UTC m=+1268.170527573" lastFinishedPulling="2026-03-11 09:00:22.360878916 +0000 UTC m=+1273.314202246" observedRunningTime="2026-03-11 09:00:23.54166307 +0000 UTC m=+1274.494986400" watchObservedRunningTime="2026-03-11 09:00:23.545937044 +0000 UTC m=+1274.499260364" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.599541 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.599523064 podStartE2EDuration="29.599523064s" podCreationTimestamp="2026-03-11 08:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:23.593263053 +0000 UTC m=+1274.546586383" watchObservedRunningTime="2026-03-11 09:00:23.599523064 +0000 UTC m=+1274.552846384" Mar 11 09:00:23 crc kubenswrapper[4808]: I0311 09:00:23.620175 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=23.769064768 podStartE2EDuration="27.620152551s" podCreationTimestamp="2026-03-11 08:59:56 +0000 UTC" firstStartedPulling="2026-03-11 09:00:17.153525111 +0000 UTC m=+1268.106848431" lastFinishedPulling="2026-03-11 09:00:21.004612884 +0000 UTC m=+1271.957936214" observedRunningTime="2026-03-11 09:00:23.609596066 +0000 UTC m=+1274.562919386" 
watchObservedRunningTime="2026-03-11 09:00:23.620152551 +0000 UTC m=+1274.573475871" Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.532342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerStarted","Data":"d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec"} Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.532627 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerStarted","Data":"be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef"} Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.532659 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.532676 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.535486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerStarted","Data":"8b395b42706b1de9013f9b75864a0671c56c131544021b5094dacdd4a57911d9"} Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.592622 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mbbhf" podStartSLOduration=18.706468666 podStartE2EDuration="23.592598997s" podCreationTimestamp="2026-03-11 09:00:01 +0000 UTC" firstStartedPulling="2026-03-11 09:00:17.368113249 +0000 UTC m=+1268.321436569" lastFinishedPulling="2026-03-11 09:00:22.25424357 +0000 UTC m=+1273.207566900" observedRunningTime="2026-03-11 09:00:24.563612328 +0000 UTC m=+1275.516935648" watchObservedRunningTime="2026-03-11 09:00:24.592598997 +0000 UTC m=+1275.545922327" Mar 11 
09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.952563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 11 09:00:24 crc kubenswrapper[4808]: I0311 09:00:24.952617 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 11 09:00:25 crc kubenswrapper[4808]: I0311 09:00:25.695139 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-56kcb" Mar 11 09:00:25 crc kubenswrapper[4808]: I0311 09:00:25.804945 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7p7j\" (UniqueName: \"kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j\") pod \"cdfb1b93-4963-4e67-a64b-6489306f8fc3\" (UID: \"cdfb1b93-4963-4e67-a64b-6489306f8fc3\") " Mar 11 09:00:25 crc kubenswrapper[4808]: I0311 09:00:25.810819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j" (OuterVolumeSpecName: "kube-api-access-w7p7j") pod "cdfb1b93-4963-4e67-a64b-6489306f8fc3" (UID: "cdfb1b93-4963-4e67-a64b-6489306f8fc3"). InnerVolumeSpecName "kube-api-access-w7p7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:25 crc kubenswrapper[4808]: I0311 09:00:25.907956 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7p7j\" (UniqueName: \"kubernetes.io/projected/cdfb1b93-4963-4e67-a64b-6489306f8fc3-kube-api-access-w7p7j\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.335659 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.337182 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.551622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerStarted","Data":"0b1ad33990e113a73cab3cbdff3db9029771d7e96b54d5e45a675f0010b3a17c"} Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.554606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03f33ae9-1e48-4adf-94bc-69ede69802d0","Type":"ContainerStarted","Data":"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4"} Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.556864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-56kcb" event={"ID":"cdfb1b93-4963-4e67-a64b-6489306f8fc3","Type":"ContainerDied","Data":"ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05"} Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.556883 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-56kcb" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.556890 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8ec7471292a2d83aac723eb464bd53bebc1671cfcb915a394e11f18ee35a05" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.574303 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.733978848 podStartE2EDuration="25.574282624s" podCreationTimestamp="2026-03-11 09:00:01 +0000 UTC" firstStartedPulling="2026-03-11 09:00:18.495710884 +0000 UTC m=+1269.449034204" lastFinishedPulling="2026-03-11 09:00:26.33601466 +0000 UTC m=+1277.289337980" observedRunningTime="2026-03-11 09:00:26.570881076 +0000 UTC m=+1277.524204396" watchObservedRunningTime="2026-03-11 09:00:26.574282624 +0000 UTC m=+1277.527605954" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.599961 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.550255059 podStartE2EDuration="22.599934356s" podCreationTimestamp="2026-03-11 09:00:04 +0000 UTC" firstStartedPulling="2026-03-11 09:00:17.269668911 +0000 UTC m=+1268.222992231" lastFinishedPulling="2026-03-11 09:00:26.319348208 +0000 UTC m=+1277.272671528" observedRunningTime="2026-03-11 09:00:26.590296437 +0000 UTC m=+1277.543619797" watchObservedRunningTime="2026-03-11 09:00:26.599934356 +0000 UTC m=+1277.553257716" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.780750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553654-fdqcq"] Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.791925 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553654-fdqcq"] Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.906307 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.936999 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.948227 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:26 crc kubenswrapper[4808]: I0311 09:00:26.979690 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.374275 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.457438 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.567609 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.567953 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.605721 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.617710 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.761930 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.811707 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a1463b-9472-494c-b46e-75c7098a9e62" path="/var/lib/kubelet/pods/f6a1463b-9472-494c-b46e-75c7098a9e62/volumes" Mar 11 
09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.812453 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:27 crc kubenswrapper[4808]: E0311 09:00:27.812723 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfb1b93-4963-4e67-a64b-6489306f8fc3" containerName="oc" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.812738 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfb1b93-4963-4e67-a64b-6489306f8fc3" containerName="oc" Mar 11 09:00:27 crc kubenswrapper[4808]: E0311 09:00:27.812781 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" containerName="collect-profiles" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.812787 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" containerName="collect-profiles" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.812925 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" containerName="collect-profiles" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.812949 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfb1b93-4963-4e67-a64b-6489306f8fc3" containerName="oc" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.813885 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.818796 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.835637 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.931926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-35b5-account-create-update-4zxv8"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.933231 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.940455 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.944257 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-l7k4m"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.945353 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.945502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnv5\" (UniqueName: \"kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.945612 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.945700 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.945615 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.967525 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-35b5-account-create-update-4zxv8"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.993947 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:00:27 crc kubenswrapper[4808]: I0311 09:00:27.999519 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.006140 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.013949 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l7k4m"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.048840 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049503 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbxf\" (UniqueName: \"kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf\") pod \"keystone-db-create-l7k4m\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049660 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " 
pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts\") pod \"keystone-db-create-l7k4m\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049851 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.049942 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050075 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts\") pod \"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050192 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjfr\" (UniqueName: \"kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr\") pod 
\"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlb92\" (UniqueName: \"kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.050723 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnv5\" (UniqueName: 
\"kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.052065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.052911 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.054247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.061287 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.106379 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnv5\" (UniqueName: \"kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5\") pod \"dnsmasq-dns-795cf8b45c-wq24n\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.140702 4808 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.152526 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.154244 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156227 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156401 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156532 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156669 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbxf\" (UniqueName: \"kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf\") pod \"keystone-db-create-l7k4m\" (UID: 
\"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts\") pod \"keystone-db-create-l7k4m\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156921 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.156975 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts\") pod \"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.157013 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjfr\" (UniqueName: \"kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr\") pod \"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.157040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.157060 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlb92\" (UniqueName: \"kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.160816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts\") pod \"keystone-db-create-l7k4m\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.161113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.161641 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.162088 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-24c8l" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.162155 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.162321 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 
09:00:28.162381 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.162797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.162828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.164215 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts\") pod \"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.165090 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.189834 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zcd8b\" (UID: 
\"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.218033 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlb92\" (UniqueName: \"kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92\") pod \"ovn-controller-metrics-zcd8b\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.218632 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbxf\" (UniqueName: \"kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf\") pod \"keystone-db-create-l7k4m\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") " pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.224485 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.233947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjfr\" (UniqueName: \"kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr\") pod \"keystone-35b5-account-create-update-4zxv8\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") " pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.241050 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.242864 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258738 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z458\" (UniqueName: \"kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258773 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258792 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.258837 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.259155 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-4zxv8" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.271123 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l7k4m" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.273610 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.282823 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2cjt7"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.284342 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.294837 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e766-account-create-update-9vw89"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.295806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.297546 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.305309 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2cjt7"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.316761 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e766-account-create-update-9vw89"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.348211 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.351416 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360050 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360083 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z458\" (UniqueName: \"kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360178 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360213 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360261 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj9c\" (UniqueName: \"kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw9vm\" (UniqueName: \"kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360319 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c87\" (UniqueName: \"kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360350 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360491 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360529 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360553 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.360594 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.361474 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.362263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.366538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.369081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.374216 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.378398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.391271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z458\" (UniqueName: \"kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458\") pod \"ovn-northd-0\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.462912 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc\") pod \"7e0d4d29-dc82-4e09-bf90-874143df5d59\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.462953 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config\") pod \"7e0d4d29-dc82-4e09-bf90-874143df5d59\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463038 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wrjg\" (UniqueName: \"kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg\") pod \"7e0d4d29-dc82-4e09-bf90-874143df5d59\" (UID: \"7e0d4d29-dc82-4e09-bf90-874143df5d59\") " Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463321 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e0d4d29-dc82-4e09-bf90-874143df5d59" (UID: "7e0d4d29-dc82-4e09-bf90-874143df5d59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463405 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463458 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.463961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config" (OuterVolumeSpecName: "config") pod "7e0d4d29-dc82-4e09-bf90-874143df5d59" (UID: "7e0d4d29-dc82-4e09-bf90-874143df5d59"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj9c\" (UniqueName: \"kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464120 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw9vm\" (UniqueName: \"kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464153 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c87\" (UniqueName: \"kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464179 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: 
\"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.464556 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.465036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.465063 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.465288 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.466236 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.466889 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.467072 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0d4d29-dc82-4e09-bf90-874143df5d59-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.467275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.467973 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg" (OuterVolumeSpecName: "kube-api-access-7wrjg") pod "7e0d4d29-dc82-4e09-bf90-874143df5d59" (UID: "7e0d4d29-dc82-4e09-bf90-874143df5d59"). InnerVolumeSpecName "kube-api-access-7wrjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.482929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c87\" (UniqueName: \"kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87\") pod \"placement-e766-account-create-update-9vw89\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") " pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.484624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj9c\" (UniqueName: \"kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c\") pod \"dnsmasq-dns-7b57d9888c-4djd4\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.488016 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw9vm\" (UniqueName: \"kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm\") pod \"placement-db-create-2cjt7\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") " pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.569055 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wrjg\" (UniqueName: \"kubernetes.io/projected/7e0d4d29-dc82-4e09-bf90-874143df5d59-kube-api-access-7wrjg\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.578028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" event={"ID":"7e0d4d29-dc82-4e09-bf90-874143df5d59","Type":"ContainerDied","Data":"74fcf50ab2a02ba0a530c763f92e3e8565239181759b04f93ddac50d33301e92"} Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.578111 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-5pxmr" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.586761 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerStarted","Data":"b5cfaa5690cabe34a4d4686bbe5047a703650cf16cae45773a50285423e560b6"} Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.635935 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.654196 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.672925 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.681387 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-5pxmr"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.697920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.706625 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.733173 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:28 crc kubenswrapper[4808]: W0311 09:00:28.777715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f098209_f2ee_4b73_bc32_e1ddb9aacb3b.slice/crio-27869a00d49770df570e328996eca8b92e472bd2fcddb6ca764367100a725d2d WatchSource:0}: Error finding container 27869a00d49770df570e328996eca8b92e472bd2fcddb6ca764367100a725d2d: Status 404 returned error can't find the container with id 27869a00d49770df570e328996eca8b92e472bd2fcddb6ca764367100a725d2d Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.807013 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-35b5-account-create-update-4zxv8"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.819459 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l7k4m"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.829292 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.840713 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.940085 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:00:28 crc kubenswrapper[4808]: I0311 09:00:28.961840 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.189887 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:00:29 crc 
kubenswrapper[4808]: I0311 09:00:29.309683 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"]
Mar 11 09:00:29 crc kubenswrapper[4808]: W0311 09:00:29.319703 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532358e7_4941_439f_b43b_b7ba3cf7c772.slice/crio-dab59514d2007fd6308989867468ecacabdda42295c4da9e3b93928a884b5712 WatchSource:0}: Error finding container dab59514d2007fd6308989867468ecacabdda42295c4da9e3b93928a884b5712: Status 404 returned error can't find the container with id dab59514d2007fd6308989867468ecacabdda42295c4da9e3b93928a884b5712
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.341303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2cjt7"]
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.388929 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e766-account-create-update-9vw89"]
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.606942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l7k4m" event={"ID":"93d7affe-753b-40fe-8e4a-fbe3d2618527","Type":"ContainerDied","Data":"828d32f123ce58adca34aac2e5a6c5de4c805db6efdb744bad2536f49ee36dc6"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.607101 4808 generic.go:334] "Generic (PLEG): container finished" podID="93d7affe-753b-40fe-8e4a-fbe3d2618527" containerID="828d32f123ce58adca34aac2e5a6c5de4c805db6efdb744bad2536f49ee36dc6" exitCode=0
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.607391 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l7k4m" event={"ID":"93d7affe-753b-40fe-8e4a-fbe3d2618527","Type":"ContainerStarted","Data":"df57b3f435c6e21fa73b72e3ffe90a90c202485f75fa53717e1ade2026b9f500"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.611060 4808 generic.go:334] "Generic (PLEG): container finished" podID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerID="3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6" exitCode=0
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.611118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" event={"ID":"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b","Type":"ContainerDied","Data":"3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.611170 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" event={"ID":"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b","Type":"ContainerStarted","Data":"27869a00d49770df570e328996eca8b92e472bd2fcddb6ca764367100a725d2d"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.613903 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2cjt7" event={"ID":"60b98410-cbed-4562-b46c-0c34025045b6","Type":"ContainerStarted","Data":"9a6403d940d4f7dd2ee9f128a6cf2b9c33e875ef7ace27e7bc576b19c88bd450"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.615888 4808 generic.go:334] "Generic (PLEG): container finished" podID="bc2d33c7-29e8-46b7-a743-80d0eed3412d" containerID="1e5053fde5053593012a204609352be46ea1f706659e831ffb42e9784089cb3d" exitCode=0
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.616018 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-35b5-account-create-update-4zxv8" event={"ID":"bc2d33c7-29e8-46b7-a743-80d0eed3412d","Type":"ContainerDied","Data":"1e5053fde5053593012a204609352be46ea1f706659e831ffb42e9784089cb3d"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.616062 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-35b5-account-create-update-4zxv8" event={"ID":"bc2d33c7-29e8-46b7-a743-80d0eed3412d","Type":"ContainerStarted","Data":"0956f946cb8492e5cef20da24a0aa9da18a3d5a848face73392b96272d5ed920"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.618000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zcd8b" event={"ID":"2f244d77-0b6a-4bcf-a6b4-dc7028019e29","Type":"ContainerStarted","Data":"6fd674f4d1eb009a734a5c6b505490fc1b34cc96907e81fed8a9032b16af9052"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.618071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zcd8b" event={"ID":"2f244d77-0b6a-4bcf-a6b4-dc7028019e29","Type":"ContainerStarted","Data":"a19043764ff278bbdc9aa76ca369bf97c17bacbe1c814fc53c6cdd85814e418b"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.623079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e766-account-create-update-9vw89" event={"ID":"6d5c43da-a137-4ac3-be09-ec76e9c204d1","Type":"ContainerStarted","Data":"638d0959e078f0628699bc7dff17a722bae223af517a821d1f3874a28633594b"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.625064 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" event={"ID":"532358e7-4941-439f-b43b-b7ba3cf7c772","Type":"ContainerStarted","Data":"dab59514d2007fd6308989867468ecacabdda42295c4da9e3b93928a884b5712"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.627727 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerStarted","Data":"a80a84c7a02538db7dd104a1c25f58454d8b8ad58240ca76147e40d3184d3d26"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.629305 4808 generic.go:334] "Generic (PLEG): container finished" podID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" containerID="c473418539d00690ea3b86150cfb60f14a8c06be4e2b498ab41241c09f30dfda" exitCode=0
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.629948 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" event={"ID":"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1","Type":"ContainerDied","Data":"c473418539d00690ea3b86150cfb60f14a8c06be4e2b498ab41241c09f30dfda"}
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.640245 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zcd8b" podStartSLOduration=2.640224522 podStartE2EDuration="2.640224522s" podCreationTimestamp="2026-03-11 09:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:29.635495415 +0000 UTC m=+1280.588818735" watchObservedRunningTime="2026-03-11 09:00:29.640224522 +0000 UTC m=+1280.593547832"
Mar 11 09:00:29 crc kubenswrapper[4808]: I0311 09:00:29.808389 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0d4d29-dc82-4e09-bf90-874143df5d59" path="/var/lib/kubelet/pods/7e0d4d29-dc82-4e09-bf90-874143df5d59/volumes"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.044904 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.218789 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfh2c\" (UniqueName: \"kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c\") pod \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") "
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.218950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config\") pod \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") "
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.218992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc\") pod \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\" (UID: \"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1\") "
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.226940 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c" (OuterVolumeSpecName: "kube-api-access-mfh2c") pod "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" (UID: "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1"). InnerVolumeSpecName "kube-api-access-mfh2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.242558 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" (UID: "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.247099 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config" (OuterVolumeSpecName: "config") pod "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" (UID: "c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.320682 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.320875 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.320953 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfh2c\" (UniqueName: \"kubernetes.io/projected/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1-kube-api-access-mfh2c\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.639871 4808 generic.go:334] "Generic (PLEG): container finished" podID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerID="b75c9e4259eb049b9eec80508597bce906d4849073c1899478bf714a695578d1" exitCode=0
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.639953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" event={"ID":"532358e7-4941-439f-b43b-b7ba3cf7c772","Type":"ContainerDied","Data":"b75c9e4259eb049b9eec80508597bce906d4849073c1899478bf714a695578d1"}
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.642236 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" event={"ID":"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b","Type":"ContainerStarted","Data":"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a"}
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.642377 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.644710 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q" event={"ID":"c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1","Type":"ContainerDied","Data":"3f398b2a084a66daa16864bfafd6679c8b79fd196a3415ee3e0c2d16d4029926"}
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.644747 4808 scope.go:117] "RemoveContainer" containerID="c473418539d00690ea3b86150cfb60f14a8c06be4e2b498ab41241c09f30dfda"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.644881 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.654892 4808 generic.go:334] "Generic (PLEG): container finished" podID="60b98410-cbed-4562-b46c-0c34025045b6" containerID="1e0b35da78fe287646897a43e662b36a085c2c17ed5eebfd88eca56be8ccd50b" exitCode=0
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.654959 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2cjt7" event={"ID":"60b98410-cbed-4562-b46c-0c34025045b6","Type":"ContainerDied","Data":"1e0b35da78fe287646897a43e662b36a085c2c17ed5eebfd88eca56be8ccd50b"}
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.657111 4808 generic.go:334] "Generic (PLEG): container finished" podID="6d5c43da-a137-4ac3-be09-ec76e9c204d1" containerID="a6e24348634dbc9149a69b40299ae6faaf01c5af688867530224a61ee87e04f9" exitCode=0
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.657185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e766-account-create-update-9vw89" event={"ID":"6d5c43da-a137-4ac3-be09-ec76e9c204d1","Type":"ContainerDied","Data":"a6e24348634dbc9149a69b40299ae6faaf01c5af688867530224a61ee87e04f9"}
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.722051 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" podStartSLOduration=3.722032633 podStartE2EDuration="3.722032633s" podCreationTimestamp="2026-03-11 09:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:30.714894776 +0000 UTC m=+1281.668218106" watchObservedRunningTime="2026-03-11 09:00:30.722032633 +0000 UTC m=+1281.675355953"
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.798339 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"]
Mar 11 09:00:30 crc kubenswrapper[4808]: I0311 09:00:30.817148 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-s9h2q"]
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.026709 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-4zxv8"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.089125 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l7k4m"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.132437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjfr\" (UniqueName: \"kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr\") pod \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") "
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.132618 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts\") pod \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\" (UID: \"bc2d33c7-29e8-46b7-a743-80d0eed3412d\") "
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.133783 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc2d33c7-29e8-46b7-a743-80d0eed3412d" (UID: "bc2d33c7-29e8-46b7-a743-80d0eed3412d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.136919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr" (OuterVolumeSpecName: "kube-api-access-xqjfr") pod "bc2d33c7-29e8-46b7-a743-80d0eed3412d" (UID: "bc2d33c7-29e8-46b7-a743-80d0eed3412d"). InnerVolumeSpecName "kube-api-access-xqjfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.234846 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts\") pod \"93d7affe-753b-40fe-8e4a-fbe3d2618527\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") "
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.235004 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbxf\" (UniqueName: \"kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf\") pod \"93d7affe-753b-40fe-8e4a-fbe3d2618527\" (UID: \"93d7affe-753b-40fe-8e4a-fbe3d2618527\") "
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.235582 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93d7affe-753b-40fe-8e4a-fbe3d2618527" (UID: "93d7affe-753b-40fe-8e4a-fbe3d2618527"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.235754 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2d33c7-29e8-46b7-a743-80d0eed3412d-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.235790 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjfr\" (UniqueName: \"kubernetes.io/projected/bc2d33c7-29e8-46b7-a743-80d0eed3412d-kube-api-access-xqjfr\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.235812 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d7affe-753b-40fe-8e4a-fbe3d2618527-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.240684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf" (OuterVolumeSpecName: "kube-api-access-5zbxf") pod "93d7affe-753b-40fe-8e4a-fbe3d2618527" (UID: "93d7affe-753b-40fe-8e4a-fbe3d2618527"). InnerVolumeSpecName "kube-api-access-5zbxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.338115 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbxf\" (UniqueName: \"kubernetes.io/projected/93d7affe-753b-40fe-8e4a-fbe3d2618527-kube-api-access-5zbxf\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.667775 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l7k4m"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.667770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l7k4m" event={"ID":"93d7affe-753b-40fe-8e4a-fbe3d2618527","Type":"ContainerDied","Data":"df57b3f435c6e21fa73b72e3ffe90a90c202485f75fa53717e1ade2026b9f500"}
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.667942 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df57b3f435c6e21fa73b72e3ffe90a90c202485f75fa53717e1ade2026b9f500"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.669717 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" event={"ID":"532358e7-4941-439f-b43b-b7ba3cf7c772","Type":"ContainerStarted","Data":"813afba209d46fc9c8821e4e477607182a369a3d843cf10a2486174345506f37"}
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.669880 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.671693 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerStarted","Data":"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644"}
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.671848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerStarted","Data":"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840"}
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.672586 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.675037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-35b5-account-create-update-4zxv8" event={"ID":"bc2d33c7-29e8-46b7-a743-80d0eed3412d","Type":"ContainerDied","Data":"0956f946cb8492e5cef20da24a0aa9da18a3d5a848face73392b96272d5ed920"}
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.675078 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0956f946cb8492e5cef20da24a0aa9da18a3d5a848face73392b96272d5ed920"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.675237 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-4zxv8"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.694889 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" podStartSLOduration=3.69486728 podStartE2EDuration="3.69486728s" podCreationTimestamp="2026-03-11 09:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:31.687254589 +0000 UTC m=+1282.640577909" watchObservedRunningTime="2026-03-11 09:00:31.69486728 +0000 UTC m=+1282.648190610"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.710221 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.107010346 podStartE2EDuration="3.710201823s" podCreationTimestamp="2026-03-11 09:00:28 +0000 UTC" firstStartedPulling="2026-03-11 09:00:29.203373632 +0000 UTC m=+1280.156696952" lastFinishedPulling="2026-03-11 09:00:30.806565109 +0000 UTC m=+1281.759888429" observedRunningTime="2026-03-11 09:00:31.710032368 +0000 UTC m=+1282.663355688" watchObservedRunningTime="2026-03-11 09:00:31.710201823 +0000 UTC m=+1282.663525143"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.714877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.804742 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" path="/var/lib/kubelet/pods/c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1/volumes"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907137 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p2tj7"]
Mar 11 09:00:31 crc kubenswrapper[4808]: E0311 09:00:31.907475 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2d33c7-29e8-46b7-a743-80d0eed3412d" containerName="mariadb-account-create-update"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907492 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2d33c7-29e8-46b7-a743-80d0eed3412d" containerName="mariadb-account-create-update"
Mar 11 09:00:31 crc kubenswrapper[4808]: E0311 09:00:31.907513 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d7affe-753b-40fe-8e4a-fbe3d2618527" containerName="mariadb-database-create"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907519 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d7affe-753b-40fe-8e4a-fbe3d2618527" containerName="mariadb-database-create"
Mar 11 09:00:31 crc kubenswrapper[4808]: E0311 09:00:31.907532 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" containerName="init"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907538 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" containerName="init"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907689 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2d33c7-29e8-46b7-a743-80d0eed3412d" containerName="mariadb-account-create-update"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907704 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d7affe-753b-40fe-8e4a-fbe3d2618527" containerName="mariadb-database-create"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.907724 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5eae4bd-3de8-43d4-95d6-2f7ff2556bb1" containerName="init"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.908193 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:31 crc kubenswrapper[4808]: I0311 09:00:31.917402 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p2tj7"]
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.018930 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b04e-account-create-update-v6p9b"]
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.019883 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.021763 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.022727 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e766-account-create-update-9vw89"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.026925 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b04e-account-create-update-v6p9b"]
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.053240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.053399 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79wfq\" (UniqueName: \"kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.085201 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2cjt7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.154636 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7c87\" (UniqueName: \"kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87\") pod \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") "
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.154825 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts\") pod \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\" (UID: \"6d5c43da-a137-4ac3-be09-ec76e9c204d1\") "
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155034 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155060 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xlp\" (UniqueName: \"kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155091 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79wfq\" (UniqueName: \"kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155836 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d5c43da-a137-4ac3-be09-ec76e9c204d1" (UID: "6d5c43da-a137-4ac3-be09-ec76e9c204d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.155948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.159854 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87" (OuterVolumeSpecName: "kube-api-access-z7c87") pod "6d5c43da-a137-4ac3-be09-ec76e9c204d1" (UID: "6d5c43da-a137-4ac3-be09-ec76e9c204d1"). InnerVolumeSpecName "kube-api-access-z7c87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.171201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79wfq\" (UniqueName: \"kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq\") pod \"glance-db-create-p2tj7\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.225724 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2tj7"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.256567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw9vm\" (UniqueName: \"kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm\") pod \"60b98410-cbed-4562-b46c-0c34025045b6\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") "
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.256642 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts\") pod \"60b98410-cbed-4562-b46c-0c34025045b6\" (UID: \"60b98410-cbed-4562-b46c-0c34025045b6\") "
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.256931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.256972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xlp\" (UniqueName: \"kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.257117 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5c43da-a137-4ac3-be09-ec76e9c204d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.257135 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7c87\" (UniqueName: \"kubernetes.io/projected/6d5c43da-a137-4ac3-be09-ec76e9c204d1-kube-api-access-z7c87\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.258030 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60b98410-cbed-4562-b46c-0c34025045b6" (UID: "60b98410-cbed-4562-b46c-0c34025045b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.258631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.262734 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm" (OuterVolumeSpecName: "kube-api-access-kw9vm") pod "60b98410-cbed-4562-b46c-0c34025045b6" (UID: "60b98410-cbed-4562-b46c-0c34025045b6"). InnerVolumeSpecName "kube-api-access-kw9vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.271204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xlp\" (UniqueName: \"kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp\") pod \"glance-b04e-account-create-update-v6p9b\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.341095 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b04e-account-create-update-v6p9b"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.358546 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw9vm\" (UniqueName: \"kubernetes.io/projected/60b98410-cbed-4562-b46c-0c34025045b6-kube-api-access-kw9vm\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.358800 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98410-cbed-4562-b46c-0c34025045b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.535142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p2tj7"]
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.684825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2cjt7" event={"ID":"60b98410-cbed-4562-b46c-0c34025045b6","Type":"ContainerDied","Data":"9a6403d940d4f7dd2ee9f128a6cf2b9c33e875ef7ace27e7bc576b19c88bd450"}
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.684870 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a6403d940d4f7dd2ee9f128a6cf2b9c33e875ef7ace27e7bc576b19c88bd450"
Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.684945 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2cjt7" Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.688227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e766-account-create-update-9vw89" event={"ID":"6d5c43da-a137-4ac3-be09-ec76e9c204d1","Type":"ContainerDied","Data":"638d0959e078f0628699bc7dff17a722bae223af517a821d1f3874a28633594b"} Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.688262 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638d0959e078f0628699bc7dff17a722bae223af517a821d1f3874a28633594b" Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.688300 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e766-account-create-update-9vw89" Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.693205 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2tj7" event={"ID":"14fb8c78-3b3c-4c0c-afba-59b67c406374","Type":"ContainerStarted","Data":"95c644b4b0d15c515ede5f2dde4990d97a7dfef8bc918c8330cd05a3cd5763c6"} Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.720514 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-p2tj7" podStartSLOduration=1.720494855 podStartE2EDuration="1.720494855s" podCreationTimestamp="2026-03-11 09:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:32.709061194 +0000 UTC m=+1283.662384524" watchObservedRunningTime="2026-03-11 09:00:32.720494855 +0000 UTC m=+1283.673818175" Mar 11 09:00:32 crc kubenswrapper[4808]: I0311 09:00:32.888693 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b04e-account-create-update-v6p9b"] Mar 11 09:00:32 crc kubenswrapper[4808]: W0311 09:00:32.904639 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df778c6_d975_42f0_853c_37b8840e76a5.slice/crio-b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0 WatchSource:0}: Error finding container b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0: Status 404 returned error can't find the container with id b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0 Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.566748 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sv8r8"] Mar 11 09:00:33 crc kubenswrapper[4808]: E0311 09:00:33.567410 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5c43da-a137-4ac3-be09-ec76e9c204d1" containerName="mariadb-account-create-update" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.567426 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5c43da-a137-4ac3-be09-ec76e9c204d1" containerName="mariadb-account-create-update" Mar 11 09:00:33 crc kubenswrapper[4808]: E0311 09:00:33.567465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b98410-cbed-4562-b46c-0c34025045b6" containerName="mariadb-database-create" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.567474 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b98410-cbed-4562-b46c-0c34025045b6" containerName="mariadb-database-create" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.567613 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b98410-cbed-4562-b46c-0c34025045b6" containerName="mariadb-database-create" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.567627 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5c43da-a137-4ac3-be09-ec76e9c204d1" containerName="mariadb-account-create-update" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.568120 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.602962 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sv8r8"] Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.631000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.690425 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k7sz\" (UniqueName: \"kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.690589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.710950 4808 generic.go:334] "Generic (PLEG): container finished" podID="14fb8c78-3b3c-4c0c-afba-59b67c406374" containerID="0389425c7f75e1346c18db409021d7a05794095c602a590c601ece68d2aec62f" exitCode=0 Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.711007 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2tj7" event={"ID":"14fb8c78-3b3c-4c0c-afba-59b67c406374","Type":"ContainerDied","Data":"0389425c7f75e1346c18db409021d7a05794095c602a590c601ece68d2aec62f"} Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.713622 4808 generic.go:334] "Generic (PLEG): container finished" podID="3df778c6-d975-42f0-853c-37b8840e76a5" 
containerID="92849822152ba4ba167bce3b72b0023eb4ac5a5867b0b2030c0a54cc5e08bb5f" exitCode=0 Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.713682 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b04e-account-create-update-v6p9b" event={"ID":"3df778c6-d975-42f0-853c-37b8840e76a5","Type":"ContainerDied","Data":"92849822152ba4ba167bce3b72b0023eb4ac5a5867b0b2030c0a54cc5e08bb5f"} Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.713710 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b04e-account-create-update-v6p9b" event={"ID":"3df778c6-d975-42f0-853c-37b8840e76a5","Type":"ContainerStarted","Data":"b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0"} Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.792050 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.792143 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k7sz\" (UniqueName: \"kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.792737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 
09:00:33.811827 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k7sz\" (UniqueName: \"kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz\") pod \"root-account-create-update-sv8r8\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:33 crc kubenswrapper[4808]: I0311 09:00:33.944836 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:34 crc kubenswrapper[4808]: I0311 09:00:34.372647 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sv8r8"] Mar 11 09:00:34 crc kubenswrapper[4808]: W0311 09:00:34.383551 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae4a4c44_d360_4ee0_bf75_64b28396b127.slice/crio-3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886 WatchSource:0}: Error finding container 3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886: Status 404 returned error can't find the container with id 3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886 Mar 11 09:00:34 crc kubenswrapper[4808]: I0311 09:00:34.728597 4808 generic.go:334] "Generic (PLEG): container finished" podID="ae4a4c44-d360-4ee0-bf75-64b28396b127" containerID="799583fc7147c5f13c161f65b5dc957a705e3b62301307bedb0b4e78c75d73db" exitCode=0 Mar 11 09:00:34 crc kubenswrapper[4808]: I0311 09:00:34.728730 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sv8r8" event={"ID":"ae4a4c44-d360-4ee0-bf75-64b28396b127","Type":"ContainerDied","Data":"799583fc7147c5f13c161f65b5dc957a705e3b62301307bedb0b4e78c75d73db"} Mar 11 09:00:34 crc kubenswrapper[4808]: I0311 09:00:34.729693 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sv8r8" 
event={"ID":"ae4a4c44-d360-4ee0-bf75-64b28396b127","Type":"ContainerStarted","Data":"3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886"} Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.205218 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b04e-account-create-update-v6p9b" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.211562 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2tj7" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.344800 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts\") pod \"14fb8c78-3b3c-4c0c-afba-59b67c406374\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.344839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts\") pod \"3df778c6-d975-42f0-853c-37b8840e76a5\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.344890 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7xlp\" (UniqueName: \"kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp\") pod \"3df778c6-d975-42f0-853c-37b8840e76a5\" (UID: \"3df778c6-d975-42f0-853c-37b8840e76a5\") " Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.344915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79wfq\" (UniqueName: \"kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq\") pod \"14fb8c78-3b3c-4c0c-afba-59b67c406374\" (UID: \"14fb8c78-3b3c-4c0c-afba-59b67c406374\") " Mar 11 
09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.345402 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14fb8c78-3b3c-4c0c-afba-59b67c406374" (UID: "14fb8c78-3b3c-4c0c-afba-59b67c406374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.345641 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3df778c6-d975-42f0-853c-37b8840e76a5" (UID: "3df778c6-d975-42f0-853c-37b8840e76a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.349780 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp" (OuterVolumeSpecName: "kube-api-access-n7xlp") pod "3df778c6-d975-42f0-853c-37b8840e76a5" (UID: "3df778c6-d975-42f0-853c-37b8840e76a5"). InnerVolumeSpecName "kube-api-access-n7xlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.350995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq" (OuterVolumeSpecName: "kube-api-access-79wfq") pod "14fb8c78-3b3c-4c0c-afba-59b67c406374" (UID: "14fb8c78-3b3c-4c0c-afba-59b67c406374"). InnerVolumeSpecName "kube-api-access-79wfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.446948 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fb8c78-3b3c-4c0c-afba-59b67c406374-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.446981 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3df778c6-d975-42f0-853c-37b8840e76a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.446994 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7xlp\" (UniqueName: \"kubernetes.io/projected/3df778c6-d975-42f0-853c-37b8840e76a5-kube-api-access-n7xlp\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.447008 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79wfq\" (UniqueName: \"kubernetes.io/projected/14fb8c78-3b3c-4c0c-afba-59b67c406374-kube-api-access-79wfq\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.743167 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b04e-account-create-update-v6p9b" event={"ID":"3df778c6-d975-42f0-853c-37b8840e76a5","Type":"ContainerDied","Data":"b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0"} Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.743224 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25f726b5d0e37841260b870d21f2624fae7000b90b5ef3003efa5926d343ac0" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.744593 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b04e-account-create-update-v6p9b" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.745224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2tj7" event={"ID":"14fb8c78-3b3c-4c0c-afba-59b67c406374","Type":"ContainerDied","Data":"95c644b4b0d15c515ede5f2dde4990d97a7dfef8bc918c8330cd05a3cd5763c6"} Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.745287 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c644b4b0d15c515ede5f2dde4990d97a7dfef8bc918c8330cd05a3cd5763c6" Mar 11 09:00:35 crc kubenswrapper[4808]: I0311 09:00:35.745252 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2tj7" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.121437 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.261265 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k7sz\" (UniqueName: \"kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz\") pod \"ae4a4c44-d360-4ee0-bf75-64b28396b127\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.261409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts\") pod \"ae4a4c44-d360-4ee0-bf75-64b28396b127\" (UID: \"ae4a4c44-d360-4ee0-bf75-64b28396b127\") " Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.262991 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ae4a4c44-d360-4ee0-bf75-64b28396b127" (UID: "ae4a4c44-d360-4ee0-bf75-64b28396b127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.263825 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae4a4c44-d360-4ee0-bf75-64b28396b127-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.281004 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz" (OuterVolumeSpecName: "kube-api-access-6k7sz") pod "ae4a4c44-d360-4ee0-bf75-64b28396b127" (UID: "ae4a4c44-d360-4ee0-bf75-64b28396b127"). InnerVolumeSpecName "kube-api-access-6k7sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.335856 4808 scope.go:117] "RemoveContainer" containerID="060f361a44d029c1563e602d7c5ca51bccbcfb1fa002f6f0dfd510cad78dd996" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.365687 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k7sz\" (UniqueName: \"kubernetes.io/projected/ae4a4c44-d360-4ee0-bf75-64b28396b127-kube-api-access-6k7sz\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.762902 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sv8r8" event={"ID":"ae4a4c44-d360-4ee0-bf75-64b28396b127","Type":"ContainerDied","Data":"3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886"} Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.763492 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0077bea01bb94470ea6c44ac9102a90efddf9136dce08d86d999d55d204886" Mar 11 09:00:36 crc kubenswrapper[4808]: I0311 09:00:36.763338 4808 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sv8r8" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.079161 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bvzx7"] Mar 11 09:00:37 crc kubenswrapper[4808]: E0311 09:00:37.079770 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fb8c78-3b3c-4c0c-afba-59b67c406374" containerName="mariadb-database-create" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.079834 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fb8c78-3b3c-4c0c-afba-59b67c406374" containerName="mariadb-database-create" Mar 11 09:00:37 crc kubenswrapper[4808]: E0311 09:00:37.079886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4a4c44-d360-4ee0-bf75-64b28396b127" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.079932 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4a4c44-d360-4ee0-bf75-64b28396b127" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: E0311 09:00:37.080002 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df778c6-d975-42f0-853c-37b8840e76a5" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.080053 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df778c6-d975-42f0-853c-37b8840e76a5" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.080244 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fb8c78-3b3c-4c0c-afba-59b67c406374" containerName="mariadb-database-create" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.080304 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4a4c44-d360-4ee0-bf75-64b28396b127" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: 
I0311 09:00:37.080372 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df778c6-d975-42f0-853c-37b8840e76a5" containerName="mariadb-account-create-update" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.080934 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.082881 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vkl5m" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.083583 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.093998 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bvzx7"] Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.183397 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.183563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5npn\" (UniqueName: \"kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.183603 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle\") pod \"glance-db-sync-bvzx7\" (UID: 
\"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.183676 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.285563 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5npn\" (UniqueName: \"kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.285637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.285706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.285760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc 
kubenswrapper[4808]: I0311 09:00:37.290722 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.290732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.291156 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.306577 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5npn\" (UniqueName: \"kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn\") pod \"glance-db-sync-bvzx7\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.397067 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bvzx7" Mar 11 09:00:37 crc kubenswrapper[4808]: W0311 09:00:37.883228 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a7b14d5_82e6_402d_a64d_cec1541d5195.slice/crio-6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f WatchSource:0}: Error finding container 6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f: Status 404 returned error can't find the container with id 6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f Mar 11 09:00:37 crc kubenswrapper[4808]: I0311 09:00:37.884210 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bvzx7"] Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.163528 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.656589 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.732680 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.776857 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"] Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.782631 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.790287 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"] Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.803331 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="dnsmasq-dns" containerID="cri-o://c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a" gracePeriod=10 Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.803400 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bvzx7" event={"ID":"9a7b14d5-82e6-402d-a64d-cec1541d5195","Type":"ContainerStarted","Data":"6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f"} Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.810904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.810957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfv2\" (UniqueName: \"kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.811008 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc\") pod 
\"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.811266 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.811329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.912933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.912983 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltfv2\" (UniqueName: \"kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.913045 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" 
(UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.913111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.913133 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.913984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.914838 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.915037 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 
crc kubenswrapper[4808]: I0311 09:00:38.915322 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:38 crc kubenswrapper[4808]: I0311 09:00:38.936784 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltfv2\" (UniqueName: \"kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2\") pod \"dnsmasq-dns-675f7dd995-lrrzj\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") " pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.134091 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.356753 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.421904 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttnv5\" (UniqueName: \"kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5\") pod \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.425274 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc\") pod \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.425376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb\") pod \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.425449 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config\") pod \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\" (UID: \"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b\") " Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.430888 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5" (OuterVolumeSpecName: "kube-api-access-ttnv5") pod "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" (UID: "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b"). InnerVolumeSpecName "kube-api-access-ttnv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.477483 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config" (OuterVolumeSpecName: "config") pod "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" (UID: "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.495057 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" (UID: "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.498382 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" (UID: "1f098209-f2ee-4b73-bc32-e1ddb9aacb3b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.527592 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.527833 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.527905 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.527990 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttnv5\" (UniqueName: \"kubernetes.io/projected/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b-kube-api-access-ttnv5\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.638101 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"] Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.827521 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" event={"ID":"6ad27efa-241d-4ee1-978d-2dacc25cb7e1","Type":"ContainerStarted","Data":"719f5e0895343dc9d8628ff011d0e449d17587f13f927a5c19c6bc369923f55d"} Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.831775 4808 generic.go:334] "Generic (PLEG): container finished" podID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerID="c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a" exitCode=0 Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.831805 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" event={"ID":"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b","Type":"ContainerDied","Data":"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a"} Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.831825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" event={"ID":"1f098209-f2ee-4b73-bc32-e1ddb9aacb3b","Type":"ContainerDied","Data":"27869a00d49770df570e328996eca8b92e472bd2fcddb6ca764367100a725d2d"} Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.831842 4808 scope.go:117] "RemoveContainer" containerID="c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.831851 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795cf8b45c-wq24n" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.866822 4808 scope.go:117] "RemoveContainer" containerID="3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.878556 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.898095 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795cf8b45c-wq24n"] Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.911927 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:00:39 crc kubenswrapper[4808]: E0311 09:00:39.912537 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="init" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.912655 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="init" Mar 11 09:00:39 crc kubenswrapper[4808]: E0311 09:00:39.912748 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="dnsmasq-dns" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.912815 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="dnsmasq-dns" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.913091 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" containerName="dnsmasq-dns" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.917020 4808 scope.go:117] "RemoveContainer" containerID="c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a" Mar 11 09:00:39 crc kubenswrapper[4808]: E0311 09:00:39.917586 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a\": container with ID starting with c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a not found: ID does not exist" containerID="c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.917617 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a"} err="failed to get container status \"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a\": rpc error: code = NotFound desc = could not find container \"c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a\": container with ID starting with c41658b35c34e584af2ffa9df43b231fe6403f219da4a7368432e5046132508a not found: ID does not exist" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.917636 4808 scope.go:117] "RemoveContainer" containerID="3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6" Mar 11 09:00:39 crc kubenswrapper[4808]: E0311 09:00:39.917872 4808 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6\": container with ID starting with 3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6 not found: ID does not exist" containerID="3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.917964 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6"} err="failed to get container status \"3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6\": rpc error: code = NotFound desc = could not find container \"3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6\": container with ID starting with 3d89b8ea7bac545a25a80a928456d55bfc80d795a6777772b7320db9992489c6 not found: ID does not exist" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.920838 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.922538 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.923601 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.923774 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r7rkp" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.924642 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.924651 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938054 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfkg\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938148 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938181 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " 
pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938246 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938292 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:39 crc kubenswrapper[4808]: I0311 09:00:39.938321 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.021852 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sv8r8"] Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.028404 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sv8r8"] Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.039476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfkg\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.039635 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.039709 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.039814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.039896 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.039919 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.039966 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:40.539945992 +0000 UTC m=+1291.493269312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.039898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.040012 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.040197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.040348 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.040538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 
crc kubenswrapper[4808]: I0311 09:00:40.044557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.059756 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.061071 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfkg\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.405905 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2sbpn"] Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.407080 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.408573 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.412885 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.413061 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.417872 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2sbpn"] Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.444885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.444940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sths\" (UniqueName: \"kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.445081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.445130 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.445260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.445338 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.445662 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 
09:00:40.547295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sths\" (UniqueName: \"kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.547507 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.548217 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.548254 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.548259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: E0311 09:00:40.548311 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:41.54828943 +0000 UTC m=+1292.501612750 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.548580 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.549060 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.551405 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.553190 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.553941 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle\") pod 
\"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.565615 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sths\" (UniqueName: \"kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths\") pod \"swift-ring-rebalance-2sbpn\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.727268 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.848754 4808 generic.go:334] "Generic (PLEG): container finished" podID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerID="c79da07aca0afa9e6f59f1241aa80a3dc20f000679b23e35d9b1c2b0255848c3" exitCode=0 Mar 11 09:00:40 crc kubenswrapper[4808]: I0311 09:00:40.848802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" event={"ID":"6ad27efa-241d-4ee1-978d-2dacc25cb7e1","Type":"ContainerDied","Data":"c79da07aca0afa9e6f59f1241aa80a3dc20f000679b23e35d9b1c2b0255848c3"} Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.230899 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2sbpn"] Mar 11 09:00:41 crc kubenswrapper[4808]: W0311 09:00:41.239665 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d430928_4434_4037_8d1c_d8cb7c8ff0f8.slice/crio-74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101 WatchSource:0}: Error finding container 74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101: Status 404 returned error can't find the container with id 74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101 Mar 11 09:00:41 crc 
kubenswrapper[4808]: I0311 09:00:41.564992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:41 crc kubenswrapper[4808]: E0311 09:00:41.565187 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:41 crc kubenswrapper[4808]: E0311 09:00:41.565205 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:41 crc kubenswrapper[4808]: E0311 09:00:41.565259 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:43.565242484 +0000 UTC m=+1294.518565804 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.832993 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f098209-f2ee-4b73-bc32-e1ddb9aacb3b" path="/var/lib/kubelet/pods/1f098209-f2ee-4b73-bc32-e1ddb9aacb3b/volumes" Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.836442 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae4a4c44-d360-4ee0-bf75-64b28396b127" path="/var/lib/kubelet/pods/ae4a4c44-d360-4ee0-bf75-64b28396b127/volumes" Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.860294 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sbpn" event={"ID":"0d430928-4434-4037-8d1c-d8cb7c8ff0f8","Type":"ContainerStarted","Data":"74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101"} Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.862706 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" event={"ID":"6ad27efa-241d-4ee1-978d-2dacc25cb7e1","Type":"ContainerStarted","Data":"e06cdeb9590b9d80c01d88d8ccb42a576bbd93754fc91726c0f46b7324808de6"} Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.865649 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:41 crc kubenswrapper[4808]: I0311 09:00:41.888770 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" podStartSLOduration=3.888752824 podStartE2EDuration="3.888752824s" podCreationTimestamp="2026-03-11 09:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 
09:00:41.887969521 +0000 UTC m=+1292.841292851" watchObservedRunningTime="2026-03-11 09:00:41.888752824 +0000 UTC m=+1292.842076154" Mar 11 09:00:43 crc kubenswrapper[4808]: I0311 09:00:43.598438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:43 crc kubenswrapper[4808]: E0311 09:00:43.598828 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:43 crc kubenswrapper[4808]: E0311 09:00:43.598875 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:43 crc kubenswrapper[4808]: E0311 09:00:43.599039 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:47.598933625 +0000 UTC m=+1298.552256945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.033292 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t747d"] Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.034558 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.036966 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.041595 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t747d"] Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.124577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts\") pod \"root-account-create-update-t747d\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.125165 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztwh\" (UniqueName: \"kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh\") pod \"root-account-create-update-t747d\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.227266 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztwh\" (UniqueName: \"kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh\") pod \"root-account-create-update-t747d\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.227437 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts\") pod \"root-account-create-update-t747d\" (UID: 
\"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.228099 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts\") pod \"root-account-create-update-t747d\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.249250 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztwh\" (UniqueName: \"kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh\") pod \"root-account-create-update-t747d\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " pod="openstack/root-account-create-update-t747d" Mar 11 09:00:45 crc kubenswrapper[4808]: I0311 09:00:45.359663 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t747d" Mar 11 09:00:46 crc kubenswrapper[4808]: I0311 09:00:46.027445 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:00:46 crc kubenswrapper[4808]: I0311 09:00:46.027502 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:00:47 crc kubenswrapper[4808]: I0311 09:00:47.667837 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:47 crc kubenswrapper[4808]: E0311 09:00:47.668386 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:47 crc kubenswrapper[4808]: E0311 09:00:47.668404 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:47 crc kubenswrapper[4808]: E0311 09:00:47.668455 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:55.66843827 +0000 UTC m=+1306.621761590 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:48 crc kubenswrapper[4808]: I0311 09:00:48.691234 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 11 09:00:49 crc kubenswrapper[4808]: I0311 09:00:49.135532 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" Mar 11 09:00:49 crc kubenswrapper[4808]: I0311 09:00:49.195880 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"] Mar 11 09:00:49 crc kubenswrapper[4808]: I0311 09:00:49.196139 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="dnsmasq-dns" containerID="cri-o://813afba209d46fc9c8821e4e477607182a369a3d843cf10a2486174345506f37" gracePeriod=10 Mar 11 09:00:49 crc kubenswrapper[4808]: I0311 09:00:49.974686 4808 generic.go:334] "Generic (PLEG): container finished" podID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerID="813afba209d46fc9c8821e4e477607182a369a3d843cf10a2486174345506f37" exitCode=0 Mar 11 09:00:49 crc kubenswrapper[4808]: I0311 09:00:49.974729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" event={"ID":"532358e7-4941-439f-b43b-b7ba3cf7c772","Type":"ContainerDied","Data":"813afba209d46fc9c8821e4e477607182a369a3d843cf10a2486174345506f37"} Mar 11 09:00:52 crc kubenswrapper[4808]: I0311 09:00:52.175383 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-spf22" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:00:52 crc kubenswrapper[4808]: 
ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 11 09:00:52 crc kubenswrapper[4808]: > Mar 11 09:00:53 crc kubenswrapper[4808]: I0311 09:00:53.655785 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Mar 11 09:00:55 crc kubenswrapper[4808]: I0311 09:00:55.704800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.705036 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.705068 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.705140 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:01:11.705117446 +0000 UTC m=+1322.658440766 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : configmap "swift-ring-files" not found Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.939767 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07" Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.940481 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5npn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceac
count,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bvzx7_openstack(9a7b14d5-82e6-402d-a64d-cec1541d5195): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:00:55 crc kubenswrapper[4808]: E0311 09:00:55.941807 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bvzx7" podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.020987 4808 generic.go:334] "Generic (PLEG): container finished" podID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerID="8b395b42706b1de9013f9b75864a0671c56c131544021b5094dacdd4a57911d9" exitCode=0 Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.022320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerDied","Data":"8b395b42706b1de9013f9b75864a0671c56c131544021b5094dacdd4a57911d9"} Mar 11 09:00:56 crc kubenswrapper[4808]: E0311 09:00:56.059157 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:ed912eee9adeda5c44804688cc7661695a42ab1a40fa46b28bdc819cefa98f07\\\"\"" pod="openstack/glance-db-sync-bvzx7" podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.327835 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.518221 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj9c\" (UniqueName: \"kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c\") pod \"532358e7-4941-439f-b43b-b7ba3cf7c772\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.518429 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config\") pod \"532358e7-4941-439f-b43b-b7ba3cf7c772\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.518483 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb\") pod \"532358e7-4941-439f-b43b-b7ba3cf7c772\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.518512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb\") pod \"532358e7-4941-439f-b43b-b7ba3cf7c772\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.518542 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc\") pod \"532358e7-4941-439f-b43b-b7ba3cf7c772\" (UID: \"532358e7-4941-439f-b43b-b7ba3cf7c772\") " Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.522995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c" (OuterVolumeSpecName: "kube-api-access-hcj9c") pod "532358e7-4941-439f-b43b-b7ba3cf7c772" (UID: "532358e7-4941-439f-b43b-b7ba3cf7c772"). InnerVolumeSpecName "kube-api-access-hcj9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.534856 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t747d"] Mar 11 09:00:56 crc kubenswrapper[4808]: W0311 09:00:56.537171 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc64c919f_53f9_4423_8ff5_76b34fa213ec.slice/crio-6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd WatchSource:0}: Error finding container 6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd: Status 404 returned error can't find the container with id 6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.565253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "532358e7-4941-439f-b43b-b7ba3cf7c772" (UID: "532358e7-4941-439f-b43b-b7ba3cf7c772"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.567328 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "532358e7-4941-439f-b43b-b7ba3cf7c772" (UID: "532358e7-4941-439f-b43b-b7ba3cf7c772"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.568295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config" (OuterVolumeSpecName: "config") pod "532358e7-4941-439f-b43b-b7ba3cf7c772" (UID: "532358e7-4941-439f-b43b-b7ba3cf7c772"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.569580 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "532358e7-4941-439f-b43b-b7ba3cf7c772" (UID: "532358e7-4941-439f-b43b-b7ba3cf7c772"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.620169 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.620204 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.620219 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.620231 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/532358e7-4941-439f-b43b-b7ba3cf7c772-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:56 crc kubenswrapper[4808]: I0311 09:00:56.620243 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcj9c\" (UniqueName: \"kubernetes.io/projected/532358e7-4941-439f-b43b-b7ba3cf7c772-kube-api-access-hcj9c\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.030485 4808 generic.go:334] "Generic (PLEG): container finished" podID="c64c919f-53f9-4423-8ff5-76b34fa213ec" containerID="b1dd2cef2df602ec6e3bf88dd7ffec4d1afd4d689d44b78e7842637f3a9514e0" exitCode=0 Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.030532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t747d" event={"ID":"c64c919f-53f9-4423-8ff5-76b34fa213ec","Type":"ContainerDied","Data":"b1dd2cef2df602ec6e3bf88dd7ffec4d1afd4d689d44b78e7842637f3a9514e0"} Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 
09:00:57.030899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t747d" event={"ID":"c64c919f-53f9-4423-8ff5-76b34fa213ec","Type":"ContainerStarted","Data":"6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd"} Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.032795 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" event={"ID":"532358e7-4941-439f-b43b-b7ba3cf7c772","Type":"ContainerDied","Data":"dab59514d2007fd6308989867468ecacabdda42295c4da9e3b93928a884b5712"} Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.032824 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-4djd4" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.032845 4808 scope.go:117] "RemoveContainer" containerID="813afba209d46fc9c8821e4e477607182a369a3d843cf10a2486174345506f37" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.034963 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerStarted","Data":"05f865332615ad9f698e0cf3c33551f4d94238b92da639d722149c1d2ab22b35"} Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.035146 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.036599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sbpn" event={"ID":"0d430928-4434-4037-8d1c-d8cb7c8ff0f8","Type":"ContainerStarted","Data":"439d8d9098e6f2a4d0490da8ab9290e2957fa9f44dcf12cb0782f780b00d1f27"} Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.056575 4808 scope.go:117] "RemoveContainer" containerID="b75c9e4259eb049b9eec80508597bce906d4849073c1899478bf714a695578d1" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.082185 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2sbpn" podStartSLOduration=2.35591254 podStartE2EDuration="17.082162668s" podCreationTimestamp="2026-03-11 09:00:40 +0000 UTC" firstStartedPulling="2026-03-11 09:00:41.242391553 +0000 UTC m=+1292.195714873" lastFinishedPulling="2026-03-11 09:00:55.968641631 +0000 UTC m=+1306.921965001" observedRunningTime="2026-03-11 09:00:57.078571164 +0000 UTC m=+1308.031894494" watchObservedRunningTime="2026-03-11 09:00:57.082162668 +0000 UTC m=+1308.035485988" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.106318 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.661658266 podStartE2EDuration="1m6.106302397s" podCreationTimestamp="2026-03-11 08:59:51 +0000 UTC" firstStartedPulling="2026-03-11 08:59:52.951165115 +0000 UTC m=+1243.904488435" lastFinishedPulling="2026-03-11 09:00:22.395809246 +0000 UTC m=+1273.349132566" observedRunningTime="2026-03-11 09:00:57.105277037 +0000 UTC m=+1308.058600377" watchObservedRunningTime="2026-03-11 09:00:57.106302397 +0000 UTC m=+1308.059625717" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.129689 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"] Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.137886 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-spf22" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:00:57 crc kubenswrapper[4808]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 11 09:00:57 crc kubenswrapper[4808]: > Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.140403 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-4djd4"] Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 
09:00:57.181457 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.185172 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.440196 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-spf22-config-dq4hx"] Mar 11 09:00:57 crc kubenswrapper[4808]: E0311 09:00:57.440640 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="dnsmasq-dns" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.440665 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="dnsmasq-dns" Mar 11 09:00:57 crc kubenswrapper[4808]: E0311 09:00:57.440704 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="init" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.440715 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="init" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.440929 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" containerName="dnsmasq-dns" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.441641 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.444634 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.454342 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spf22-config-dq4hx"] Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.533618 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.533709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.533880 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bd6q\" (UniqueName: \"kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.533987 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: 
\"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.534066 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.534104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635568 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bd6q\" (UniqueName: \"kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run\") pod 
\"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635639 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635773 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635971 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: 
\"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.635974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.636256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.637804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.658151 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bd6q\" (UniqueName: \"kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q\") pod \"ovn-controller-spf22-config-dq4hx\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.768086 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:00:57 crc kubenswrapper[4808]: I0311 09:00:57.801484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532358e7-4941-439f-b43b-b7ba3cf7c772" path="/var/lib/kubelet/pods/532358e7-4941-439f-b43b-b7ba3cf7c772/volumes" Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.279522 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spf22-config-dq4hx"] Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.408990 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t747d" Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.550984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztwh\" (UniqueName: \"kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh\") pod \"c64c919f-53f9-4423-8ff5-76b34fa213ec\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.551037 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts\") pod \"c64c919f-53f9-4423-8ff5-76b34fa213ec\" (UID: \"c64c919f-53f9-4423-8ff5-76b34fa213ec\") " Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.551768 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c64c919f-53f9-4423-8ff5-76b34fa213ec" (UID: "c64c919f-53f9-4423-8ff5-76b34fa213ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.556965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh" (OuterVolumeSpecName: "kube-api-access-pztwh") pod "c64c919f-53f9-4423-8ff5-76b34fa213ec" (UID: "c64c919f-53f9-4423-8ff5-76b34fa213ec"). InnerVolumeSpecName "kube-api-access-pztwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.652960 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztwh\" (UniqueName: \"kubernetes.io/projected/c64c919f-53f9-4423-8ff5-76b34fa213ec-kube-api-access-pztwh\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:58 crc kubenswrapper[4808]: I0311 09:00:58.652998 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c64c919f-53f9-4423-8ff5-76b34fa213ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:59 crc kubenswrapper[4808]: I0311 09:00:59.063828 4808 generic.go:334] "Generic (PLEG): container finished" podID="9cd59a2e-7753-40d1-a875-0aa72a2a163c" containerID="6cf76293f7898e4cc19049e75e962faaa5a680212f9f242bc918b30601d04608" exitCode=0 Mar 11 09:00:59 crc kubenswrapper[4808]: I0311 09:00:59.063892 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22-config-dq4hx" event={"ID":"9cd59a2e-7753-40d1-a875-0aa72a2a163c","Type":"ContainerDied","Data":"6cf76293f7898e4cc19049e75e962faaa5a680212f9f242bc918b30601d04608"} Mar 11 09:00:59 crc kubenswrapper[4808]: I0311 09:00:59.063915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22-config-dq4hx" event={"ID":"9cd59a2e-7753-40d1-a875-0aa72a2a163c","Type":"ContainerStarted","Data":"d02dc9acfb7d5f2b49da7fd7476b08f6fddfba9932f0d3af2cb55a6819edf3f4"} Mar 11 09:00:59 crc 
kubenswrapper[4808]: I0311 09:00:59.066004 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t747d" event={"ID":"c64c919f-53f9-4423-8ff5-76b34fa213ec","Type":"ContainerDied","Data":"6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd"} Mar 11 09:00:59 crc kubenswrapper[4808]: I0311 09:00:59.066033 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd6eb3dd64a5713cf8da3231ce3375f2a28852dd83cb0691a266f2cd78010dd" Mar 11 09:00:59 crc kubenswrapper[4808]: I0311 09:00:59.066095 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t747d" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.075589 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerID="b5cfaa5690cabe34a4d4686bbe5047a703650cf16cae45773a50285423e560b6" exitCode=0 Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.075652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerDied","Data":"b5cfaa5690cabe34a4d4686bbe5047a703650cf16cae45773a50285423e560b6"} Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.417198 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.590743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.590824 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.590852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.590984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.591022 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bd6q\" (UniqueName: \"kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.591042 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn\") pod \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\" (UID: \"9cd59a2e-7753-40d1-a875-0aa72a2a163c\") " Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.591554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.592334 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.592391 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.592410 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run" (OuterVolumeSpecName: "var-run") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.592821 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts" (OuterVolumeSpecName: "scripts") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.596320 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q" (OuterVolumeSpecName: "kube-api-access-5bd6q") pod "9cd59a2e-7753-40d1-a875-0aa72a2a163c" (UID: "9cd59a2e-7753-40d1-a875-0aa72a2a163c"). InnerVolumeSpecName "kube-api-access-5bd6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692605 4808 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692646 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bd6q\" (UniqueName: \"kubernetes.io/projected/9cd59a2e-7753-40d1-a875-0aa72a2a163c-kube-api-access-5bd6q\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692659 4808 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692671 4808 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-additional-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692683 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cd59a2e-7753-40d1-a875-0aa72a2a163c-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:00 crc kubenswrapper[4808]: I0311 09:01:00.692694 4808 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cd59a2e-7753-40d1-a875-0aa72a2a163c-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.085448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerStarted","Data":"0f8754c1594d2feb21d234a73d819b71534a5e40f918961ac2feb938e1330d6c"} Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.085985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.087612 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22-config-dq4hx" event={"ID":"9cd59a2e-7753-40d1-a875-0aa72a2a163c","Type":"ContainerDied","Data":"d02dc9acfb7d5f2b49da7fd7476b08f6fddfba9932f0d3af2cb55a6819edf3f4"} Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.087646 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02dc9acfb7d5f2b49da7fd7476b08f6fddfba9932f0d3af2cb55a6819edf3f4" Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.087716 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spf22-config-dq4hx" Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.113512 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371967.741285 podStartE2EDuration="1m9.113491349s" podCreationTimestamp="2026-03-11 08:59:52 +0000 UTC" firstStartedPulling="2026-03-11 08:59:54.189743251 +0000 UTC m=+1245.143066571" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:01.108558306 +0000 UTC m=+1312.061881636" watchObservedRunningTime="2026-03-11 09:01:01.113491349 +0000 UTC m=+1312.066814669" Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.545140 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spf22-config-dq4hx"] Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.553262 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-spf22-config-dq4hx"] Mar 11 09:01:01 crc kubenswrapper[4808]: I0311 09:01:01.801278 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd59a2e-7753-40d1-a875-0aa72a2a163c" path="/var/lib/kubelet/pods/9cd59a2e-7753-40d1-a875-0aa72a2a163c/volumes" Mar 11 09:01:02 crc kubenswrapper[4808]: I0311 09:01:02.169604 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-spf22" Mar 11 09:01:03 crc kubenswrapper[4808]: I0311 09:01:03.102544 4808 generic.go:334] "Generic (PLEG): container finished" podID="0d430928-4434-4037-8d1c-d8cb7c8ff0f8" containerID="439d8d9098e6f2a4d0490da8ab9290e2957fa9f44dcf12cb0782f780b00d1f27" exitCode=0 Mar 11 09:01:03 crc kubenswrapper[4808]: I0311 09:01:03.102628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sbpn" event={"ID":"0d430928-4434-4037-8d1c-d8cb7c8ff0f8","Type":"ContainerDied","Data":"439d8d9098e6f2a4d0490da8ab9290e2957fa9f44dcf12cb0782f780b00d1f27"} 
Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.436619 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sths\" (UniqueName: \"kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552772 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552940 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: 
I0311 09:01:04.552970 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.552993 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf\") pod \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\" (UID: \"0d430928-4434-4037-8d1c-d8cb7c8ff0f8\") " Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.554226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.554419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.558123 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths" (OuterVolumeSpecName: "kube-api-access-5sths") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "kube-api-access-5sths". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.561108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.573067 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts" (OuterVolumeSpecName: "scripts") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.575702 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.576190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0d430928-4434-4037-8d1c-d8cb7c8ff0f8" (UID: "0d430928-4434-4037-8d1c-d8cb7c8ff0f8"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655266 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655308 4808 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655321 4808 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655333 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655346 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sths\" (UniqueName: \"kubernetes.io/projected/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-kube-api-access-5sths\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655381 4808 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:04 crc kubenswrapper[4808]: I0311 09:01:04.655392 4808 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d430928-4434-4037-8d1c-d8cb7c8ff0f8-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:05 crc kubenswrapper[4808]: I0311 09:01:05.120978 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sbpn" event={"ID":"0d430928-4434-4037-8d1c-d8cb7c8ff0f8","Type":"ContainerDied","Data":"74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101"} Mar 11 09:01:05 crc kubenswrapper[4808]: I0311 09:01:05.121046 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74e030503e5cd9b2d5f87e531b8ad5e1da86ac93320b6749a40949eb9fb96101" Mar 11 09:01:05 crc kubenswrapper[4808]: I0311 09:01:05.121082 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2sbpn" Mar 11 09:01:09 crc kubenswrapper[4808]: I0311 09:01:09.153554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bvzx7" event={"ID":"9a7b14d5-82e6-402d-a64d-cec1541d5195","Type":"ContainerStarted","Data":"29557113955ec225b88890526bdc8ee7308caa89a7f024d9bfa00ba222ead59c"} Mar 11 09:01:09 crc kubenswrapper[4808]: I0311 09:01:09.171004 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bvzx7" podStartSLOduration=1.834287609 podStartE2EDuration="32.1709879s" podCreationTimestamp="2026-03-11 09:00:37 +0000 UTC" firstStartedPulling="2026-03-11 09:00:37.885900488 +0000 UTC m=+1288.839223808" lastFinishedPulling="2026-03-11 09:01:08.222600789 +0000 UTC m=+1319.175924099" observedRunningTime="2026-03-11 09:01:09.170735262 +0000 UTC m=+1320.124058602" watchObservedRunningTime="2026-03-11 09:01:09.1709879 +0000 UTC m=+1320.124311220" Mar 11 09:01:11 crc kubenswrapper[4808]: I0311 09:01:11.792071 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:01:11 crc kubenswrapper[4808]: I0311 09:01:11.804778 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"swift-storage-0\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " pod="openstack/swift-storage-0" Mar 11 09:01:12 crc kubenswrapper[4808]: I0311 09:01:12.078123 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:01:12 crc kubenswrapper[4808]: I0311 09:01:12.608798 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:01:12 crc kubenswrapper[4808]: I0311 09:01:12.622592 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:01:13 crc kubenswrapper[4808]: I0311 09:01:13.199015 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"14a821528e04adaea86b29736c1578ca034f3a46815e09028c6efdf76369972a"} Mar 11 09:01:13 crc kubenswrapper[4808]: I0311 09:01:13.560960 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.208958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"681ee01ae1a57e4a64b46a088c7eb77d95b1fdf9586896e89410116abec29a90"} Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.209388 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"82f8358e3c23ae5caf4686e4d2ed129be3db93f6c1646acae71a1541a038ba65"} Mar 11 09:01:14 crc kubenswrapper[4808]: E0311 09:01:14.260715 4808 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.113:47716->38.102.83.113:39975: write tcp 38.102.83.113:47716->38.102.83.113:39975: write: broken pipe Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.524961 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sz9x6"] Mar 11 09:01:14 crc kubenswrapper[4808]: E0311 09:01:14.525346 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd59a2e-7753-40d1-a875-0aa72a2a163c" containerName="ovn-config" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525387 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd59a2e-7753-40d1-a875-0aa72a2a163c" containerName="ovn-config" Mar 11 09:01:14 crc kubenswrapper[4808]: E0311 09:01:14.525419 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64c919f-53f9-4423-8ff5-76b34fa213ec" containerName="mariadb-account-create-update" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525428 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64c919f-53f9-4423-8ff5-76b34fa213ec" containerName="mariadb-account-create-update" Mar 11 09:01:14 crc kubenswrapper[4808]: E0311 09:01:14.525442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d430928-4434-4037-8d1c-d8cb7c8ff0f8" containerName="swift-ring-rebalance" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525453 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d430928-4434-4037-8d1c-d8cb7c8ff0f8" containerName="swift-ring-rebalance" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525629 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64c919f-53f9-4423-8ff5-76b34fa213ec" containerName="mariadb-account-create-update" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525659 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd59a2e-7753-40d1-a875-0aa72a2a163c" containerName="ovn-config" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.525678 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0d430928-4434-4037-8d1c-d8cb7c8ff0f8" containerName="swift-ring-rebalance" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.526324 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.539655 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sz9x6"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.617993 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e4df-account-create-update-dwl4j"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.619063 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.625715 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.628785 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e4df-account-create-update-dwl4j"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.637090 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqnjb\" (UniqueName: \"kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.637154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc 
kubenswrapper[4808]: I0311 09:01:14.727278 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d84g2"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.728475 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.736119 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c800-account-create-update-qnvq4"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.737226 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.738339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqnjb\" (UniqueName: \"kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.738399 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.738437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86cc\" (UniqueName: \"kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: 
I0311 09:01:14.738492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.739123 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.742740 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.743722 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d84g2"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.775158 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqnjb\" (UniqueName: \"kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb\") pod \"cinder-db-create-sz9x6\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.816090 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c800-account-create-update-qnvq4"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841400 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " 
pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841499 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86cc\" (UniqueName: \"kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841518 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841536 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtxj\" (UniqueName: \"kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.841580 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzx4\" (UniqueName: 
\"kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.842206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.843372 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.889959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86cc\" (UniqueName: \"kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc\") pod \"cinder-e4df-account-create-update-dwl4j\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.890042 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cgtv2"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.891214 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.910240 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.910411 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.910506 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7fg4s" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.910610 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.911689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cgtv2"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.936111 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943073 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943107 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtxj\" (UniqueName: \"kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xnxzj\" (UniqueName: \"kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943194 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzx4\" (UniqueName: \"kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943243 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943282 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.943911 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.944350 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.946647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-j6v9x"] Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.947630 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.974780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtxj\" (UniqueName: \"kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj\") pod \"barbican-db-create-d84g2\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:14 crc kubenswrapper[4808]: I0311 09:01:14.993051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzx4\" (UniqueName: \"kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4\") pod \"barbican-c800-account-create-update-qnvq4\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.044934 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j6v9x"] Mar 
11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.046162 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.046334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts\") pod \"neutron-db-create-j6v9x\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.046425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnxzj\" (UniqueName: \"kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.046458 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.046502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2j5h\" (UniqueName: \"kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h\") pod \"neutron-db-create-j6v9x\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 
09:01:15.050712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.055424 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.058203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.062119 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dde6-account-create-update-kdxb5"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.063798 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.066714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxzj\" (UniqueName: \"kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj\") pod \"keystone-db-sync-cgtv2\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.066777 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.066988 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.086728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dde6-account-create-update-kdxb5"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.148105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vd4\" (UniqueName: \"kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.148164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2j5h\" (UniqueName: \"kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h\") pod \"neutron-db-create-j6v9x\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.148253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.148457 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts\") pod \"neutron-db-create-j6v9x\" (UID: 
\"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.149267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts\") pod \"neutron-db-create-j6v9x\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.165455 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2j5h\" (UniqueName: \"kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h\") pod \"neutron-db-create-j6v9x\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.233710 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"081fbc280b7cac976f12e7408ccac953bdb3e17b1d8f4bac92f9023da0402d27"} Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.233748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"267b966e81c3ff2dcbbdf7e8e9d6fb5a08be9d360dec65483ea3395dbc33a811"} Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.236326 4808 generic.go:334] "Generic (PLEG): container finished" podID="9a7b14d5-82e6-402d-a64d-cec1541d5195" containerID="29557113955ec225b88890526bdc8ee7308caa89a7f024d9bfa00ba222ead59c" exitCode=0 Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.236367 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bvzx7" 
event={"ID":"9a7b14d5-82e6-402d-a64d-cec1541d5195","Type":"ContainerDied","Data":"29557113955ec225b88890526bdc8ee7308caa89a7f024d9bfa00ba222ead59c"} Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.250263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vd4\" (UniqueName: \"kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.250564 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.251314 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.288251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vd4\" (UniqueName: \"kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4\") pod \"neutron-dde6-account-create-update-kdxb5\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.307253 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.326950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.406711 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.413133 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sz9x6"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.624446 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e4df-account-create-update-dwl4j"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.697539 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d84g2"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.712030 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c800-account-create-update-qnvq4"] Mar 11 09:01:15 crc kubenswrapper[4808]: I0311 09:01:15.922254 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cgtv2"] Mar 11 09:01:15 crc kubenswrapper[4808]: W0311 09:01:15.984121 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda124b7a1_bcf0_4d53_82b1_2a32992b56b8.slice/crio-b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc WatchSource:0}: Error finding container b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc: Status 404 returned error can't find the container with id b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc Mar 11 09:01:15 crc kubenswrapper[4808]: W0311 09:01:15.985981 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb827b1_9ec7_4234_a8e6_38072c48a09c.slice/crio-92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9 WatchSource:0}: Error finding container 92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9: Status 404 returned error can't find the container with id 92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9 Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.007548 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j6v9x"] Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.016280 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dde6-account-create-update-kdxb5"] Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.028795 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.028843 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:01:16 crc kubenswrapper[4808]: W0311 09:01:16.052893 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52065a8d_3ef3_4770_9f24_60d40800efcb.slice/crio-899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08 WatchSource:0}: Error finding container 899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08: Status 404 returned error can't find the container with id 
899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08 Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.245445 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dde6-account-create-update-kdxb5" event={"ID":"52065a8d-3ef3-4770-9f24-60d40800efcb","Type":"ContainerStarted","Data":"899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.247794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4df-account-create-update-dwl4j" event={"ID":"e6da51ff-4f87-4c78-aff9-1b60b3a23633","Type":"ContainerStarted","Data":"63c3878f799661a8107b4ab4884d9b3b435825ecb495eafd1aa272df9f3dac40"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.247842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4df-account-create-update-dwl4j" event={"ID":"e6da51ff-4f87-4c78-aff9-1b60b3a23633","Type":"ContainerStarted","Data":"485740f51a86bb0a24287f599778cedf468953fa581256622194167a08ef4792"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.248706 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j6v9x" event={"ID":"afa60fd1-a148-4127-ae9d-72f6a61f6cce","Type":"ContainerStarted","Data":"d2bb0001d0eeb44415d4c300e17d7487a6a70d3c49e58fe30d2dcfcffd92a4c1"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.249786 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cgtv2" event={"ID":"a124b7a1-bcf0-4d53-82b1-2a32992b56b8","Type":"ContainerStarted","Data":"b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.252803 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d84g2" event={"ID":"0beff323-e564-46ab-b5d1-ff40920e373e","Type":"ContainerStarted","Data":"a9af3faba61ee935bc1364d08d4073c06283c9c6cc04235fb28e0798f6b9f905"} Mar 11 09:01:16 crc 
kubenswrapper[4808]: I0311 09:01:16.259185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sz9x6" event={"ID":"02770ee6-83bc-4c09-a98e-d1e1624bd759","Type":"ContainerStarted","Data":"9eff5d169ad499295a27031b8c4fa2bdf75a4be1ad00b111d6ac055fc71535bb"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.259219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sz9x6" event={"ID":"02770ee6-83bc-4c09-a98e-d1e1624bd759","Type":"ContainerStarted","Data":"5b3ac7ea52a29bfa2a05b928f3ca6b39cdb10b51fe99db02ce3a6bc8740f11fa"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.264531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c800-account-create-update-qnvq4" event={"ID":"3eb827b1-9ec7-4234-a8e6-38072c48a09c","Type":"ContainerStarted","Data":"92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9"} Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.269527 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e4df-account-create-update-dwl4j" podStartSLOduration=2.269505864 podStartE2EDuration="2.269505864s" podCreationTimestamp="2026-03-11 09:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:16.264801408 +0000 UTC m=+1327.218124728" watchObservedRunningTime="2026-03-11 09:01:16.269505864 +0000 UTC m=+1327.222829184" Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.295004 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-sz9x6" podStartSLOduration=2.294983261 podStartE2EDuration="2.294983261s" podCreationTimestamp="2026-03-11 09:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:16.288247606 +0000 UTC m=+1327.241570926" 
watchObservedRunningTime="2026-03-11 09:01:16.294983261 +0000 UTC m=+1327.248306581" Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.893856 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bvzx7" Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.985492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5npn\" (UniqueName: \"kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn\") pod \"9a7b14d5-82e6-402d-a64d-cec1541d5195\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.985988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data\") pod \"9a7b14d5-82e6-402d-a64d-cec1541d5195\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.986018 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data\") pod \"9a7b14d5-82e6-402d-a64d-cec1541d5195\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.986040 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle\") pod \"9a7b14d5-82e6-402d-a64d-cec1541d5195\" (UID: \"9a7b14d5-82e6-402d-a64d-cec1541d5195\") " Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.989907 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn" (OuterVolumeSpecName: "kube-api-access-q5npn") pod 
"9a7b14d5-82e6-402d-a64d-cec1541d5195" (UID: "9a7b14d5-82e6-402d-a64d-cec1541d5195"). InnerVolumeSpecName "kube-api-access-q5npn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:16 crc kubenswrapper[4808]: I0311 09:01:16.992749 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9a7b14d5-82e6-402d-a64d-cec1541d5195" (UID: "9a7b14d5-82e6-402d-a64d-cec1541d5195"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.065573 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a7b14d5-82e6-402d-a64d-cec1541d5195" (UID: "9a7b14d5-82e6-402d-a64d-cec1541d5195"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.092582 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5npn\" (UniqueName: \"kubernetes.io/projected/9a7b14d5-82e6-402d-a64d-cec1541d5195-kube-api-access-q5npn\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.092610 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.092619 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.116581 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data" (OuterVolumeSpecName: "config-data") pod "9a7b14d5-82e6-402d-a64d-cec1541d5195" (UID: "9a7b14d5-82e6-402d-a64d-cec1541d5195"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.194232 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a7b14d5-82e6-402d-a64d-cec1541d5195-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.272789 4808 generic.go:334] "Generic (PLEG): container finished" podID="afa60fd1-a148-4127-ae9d-72f6a61f6cce" containerID="7d3f919e929f2c67fd8f52fc95d0e63db5f86456fb6f7636f5fe99bacfe5ab3e" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.272874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j6v9x" event={"ID":"afa60fd1-a148-4127-ae9d-72f6a61f6cce","Type":"ContainerDied","Data":"7d3f919e929f2c67fd8f52fc95d0e63db5f86456fb6f7636f5fe99bacfe5ab3e"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.276000 4808 generic.go:334] "Generic (PLEG): container finished" podID="0beff323-e564-46ab-b5d1-ff40920e373e" containerID="53d4dc592fa925d1f668a0777b7efa1859394b6a110fbb6a7bc3f84c3d69c10b" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.276043 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d84g2" event={"ID":"0beff323-e564-46ab-b5d1-ff40920e373e","Type":"ContainerDied","Data":"53d4dc592fa925d1f668a0777b7efa1859394b6a110fbb6a7bc3f84c3d69c10b"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.295831 4808 generic.go:334] "Generic (PLEG): container finished" podID="3eb827b1-9ec7-4234-a8e6-38072c48a09c" containerID="e7f73eff01ef0223f26e774598f8d99a1acae0436283634adab1b2c6d4f6862e" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.295937 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c800-account-create-update-qnvq4" 
event={"ID":"3eb827b1-9ec7-4234-a8e6-38072c48a09c","Type":"ContainerDied","Data":"e7f73eff01ef0223f26e774598f8d99a1acae0436283634adab1b2c6d4f6862e"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.303309 4808 generic.go:334] "Generic (PLEG): container finished" podID="02770ee6-83bc-4c09-a98e-d1e1624bd759" containerID="9eff5d169ad499295a27031b8c4fa2bdf75a4be1ad00b111d6ac055fc71535bb" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.303508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sz9x6" event={"ID":"02770ee6-83bc-4c09-a98e-d1e1624bd759","Type":"ContainerDied","Data":"9eff5d169ad499295a27031b8c4fa2bdf75a4be1ad00b111d6ac055fc71535bb"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.313739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"37d0039d1a631f590de0477dafeba545b23952735655b1396940b1af448dcf5c"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.313779 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"b34c64e3d4e3825344ce6d566596454bdaa22b31521f5119b23e2df58bc1f23d"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.313789 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"ba474a4645c4e2ea029e0f9ff8bded4ccbfb98f7157a6bc5a8efcb5ca613c7de"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.315109 4808 generic.go:334] "Generic (PLEG): container finished" podID="52065a8d-3ef3-4770-9f24-60d40800efcb" containerID="1e995a57e4f031753f2434695222feaa9b97c3336d52b19d5f236611f1937daa" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.315147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-dde6-account-create-update-kdxb5" event={"ID":"52065a8d-3ef3-4770-9f24-60d40800efcb","Type":"ContainerDied","Data":"1e995a57e4f031753f2434695222feaa9b97c3336d52b19d5f236611f1937daa"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.316512 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bvzx7" event={"ID":"9a7b14d5-82e6-402d-a64d-cec1541d5195","Type":"ContainerDied","Data":"6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.316562 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6812b1e28314e6d111bf20de20510cd8245fa345d173cd3f2f90892987fa1e2f" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.316637 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bvzx7" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.320196 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6da51ff-4f87-4c78-aff9-1b60b3a23633" containerID="63c3878f799661a8107b4ab4884d9b3b435825ecb495eafd1aa272df9f3dac40" exitCode=0 Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.320228 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4df-account-create-update-dwl4j" event={"ID":"e6da51ff-4f87-4c78-aff9-1b60b3a23633","Type":"ContainerDied","Data":"63c3878f799661a8107b4ab4884d9b3b435825ecb495eafd1aa272df9f3dac40"} Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.611866 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"] Mar 11 09:01:17 crc kubenswrapper[4808]: E0311 09:01:17.612493 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" containerName="glance-db-sync" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.612505 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" containerName="glance-db-sync" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.612666 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" containerName="glance-db-sync" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.613524 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.639851 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"] Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.705711 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.705861 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.705892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhbq\" (UniqueName: \"kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.705916 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.705944 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.809812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.809939 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.809963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhbq\" (UniqueName: \"kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.810000 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.810030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.811621 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.811691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.811741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.812490 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: 
\"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.830908 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhbq\" (UniqueName: \"kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq\") pod \"dnsmasq-dns-7f58d6bb6f-d48mb\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:17 crc kubenswrapper[4808]: I0311 09:01:17.949913 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" Mar 11 09:01:18 crc kubenswrapper[4808]: I0311 09:01:18.335911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"91cb6e0a1f936b6dc40058ec5049a18b1acb5b6601c22fa29cce4e18b74747dd"} Mar 11 09:01:18 crc kubenswrapper[4808]: I0311 09:01:18.444663 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"] Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.007979 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c800-account-create-update-qnvq4" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.014414 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-kdxb5" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.031268 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e4df-account-create-update-dwl4j" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.034720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts\") pod \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.034868 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffzx4\" (UniqueName: \"kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4\") pod \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\" (UID: \"3eb827b1-9ec7-4234-a8e6-38072c48a09c\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.043940 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4" (OuterVolumeSpecName: "kube-api-access-ffzx4") pod "3eb827b1-9ec7-4234-a8e6-38072c48a09c" (UID: "3eb827b1-9ec7-4234-a8e6-38072c48a09c"). InnerVolumeSpecName "kube-api-access-ffzx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.045681 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3eb827b1-9ec7-4234-a8e6-38072c48a09c" (UID: "3eb827b1-9ec7-4234-a8e6-38072c48a09c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.057683 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-j6v9x" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.060380 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sz9x6" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.071790 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d84g2" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwtxj\" (UniqueName: \"kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj\") pod \"0beff323-e564-46ab-b5d1-ff40920e373e\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqnjb\" (UniqueName: \"kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb\") pod \"02770ee6-83bc-4c09-a98e-d1e1624bd759\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136155 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts\") pod \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts\") pod \"52065a8d-3ef3-4770-9f24-60d40800efcb\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136292 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-55vd4\" (UniqueName: \"kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4\") pod \"52065a8d-3ef3-4770-9f24-60d40800efcb\" (UID: \"52065a8d-3ef3-4770-9f24-60d40800efcb\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2j5h\" (UniqueName: \"kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h\") pod \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts\") pod \"02770ee6-83bc-4c09-a98e-d1e1624bd759\" (UID: \"02770ee6-83bc-4c09-a98e-d1e1624bd759\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136452 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts\") pod \"0beff323-e564-46ab-b5d1-ff40920e373e\" (UID: \"0beff323-e564-46ab-b5d1-ff40920e373e\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts\") pod \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\" (UID: \"afa60fd1-a148-4127-ae9d-72f6a61f6cce\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136540 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x86cc\" (UniqueName: \"kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc\") pod 
\"e6da51ff-4f87-4c78-aff9-1b60b3a23633\" (UID: \"e6da51ff-4f87-4c78-aff9-1b60b3a23633\") " Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136848 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb827b1-9ec7-4234-a8e6-38072c48a09c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.136867 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffzx4\" (UniqueName: \"kubernetes.io/projected/3eb827b1-9ec7-4234-a8e6-38072c48a09c-kube-api-access-ffzx4\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.137406 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02770ee6-83bc-4c09-a98e-d1e1624bd759" (UID: "02770ee6-83bc-4c09-a98e-d1e1624bd759"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.137815 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0beff323-e564-46ab-b5d1-ff40920e373e" (UID: "0beff323-e564-46ab-b5d1-ff40920e373e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.137881 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6da51ff-4f87-4c78-aff9-1b60b3a23633" (UID: "e6da51ff-4f87-4c78-aff9-1b60b3a23633"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.138320 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52065a8d-3ef3-4770-9f24-60d40800efcb" (UID: "52065a8d-3ef3-4770-9f24-60d40800efcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.138316 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afa60fd1-a148-4127-ae9d-72f6a61f6cce" (UID: "afa60fd1-a148-4127-ae9d-72f6a61f6cce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.159512 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4" (OuterVolumeSpecName: "kube-api-access-55vd4") pod "52065a8d-3ef3-4770-9f24-60d40800efcb" (UID: "52065a8d-3ef3-4770-9f24-60d40800efcb"). InnerVolumeSpecName "kube-api-access-55vd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.169487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj" (OuterVolumeSpecName: "kube-api-access-wwtxj") pod "0beff323-e564-46ab-b5d1-ff40920e373e" (UID: "0beff323-e564-46ab-b5d1-ff40920e373e"). InnerVolumeSpecName "kube-api-access-wwtxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.169563 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h" (OuterVolumeSpecName: "kube-api-access-v2j5h") pod "afa60fd1-a148-4127-ae9d-72f6a61f6cce" (UID: "afa60fd1-a148-4127-ae9d-72f6a61f6cce"). InnerVolumeSpecName "kube-api-access-v2j5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.169687 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc" (OuterVolumeSpecName: "kube-api-access-x86cc") pod "e6da51ff-4f87-4c78-aff9-1b60b3a23633" (UID: "e6da51ff-4f87-4c78-aff9-1b60b3a23633"). InnerVolumeSpecName "kube-api-access-x86cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.170223 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb" (OuterVolumeSpecName: "kube-api-access-rqnjb") pod "02770ee6-83bc-4c09-a98e-d1e1624bd759" (UID: "02770ee6-83bc-4c09-a98e-d1e1624bd759"). InnerVolumeSpecName "kube-api-access-rqnjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238448 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52065a8d-3ef3-4770-9f24-60d40800efcb-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238496 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vd4\" (UniqueName: \"kubernetes.io/projected/52065a8d-3ef3-4770-9f24-60d40800efcb-kube-api-access-55vd4\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238516 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2j5h\" (UniqueName: \"kubernetes.io/projected/afa60fd1-a148-4127-ae9d-72f6a61f6cce-kube-api-access-v2j5h\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238528 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02770ee6-83bc-4c09-a98e-d1e1624bd759-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238540 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0beff323-e564-46ab-b5d1-ff40920e373e-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238552 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa60fd1-a148-4127-ae9d-72f6a61f6cce-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238566 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x86cc\" (UniqueName: \"kubernetes.io/projected/e6da51ff-4f87-4c78-aff9-1b60b3a23633-kube-api-access-x86cc\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238580 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwtxj\" (UniqueName: \"kubernetes.io/projected/0beff323-e564-46ab-b5d1-ff40920e373e-kube-api-access-wwtxj\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238595 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqnjb\" (UniqueName: \"kubernetes.io/projected/02770ee6-83bc-4c09-a98e-d1e1624bd759-kube-api-access-rqnjb\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.238607 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6da51ff-4f87-4c78-aff9-1b60b3a23633-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.344517 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerID="50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6" exitCode=0
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.344558 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" event={"ID":"ef7117e6-2beb-4153-85b5-6b75d68e1a9a","Type":"ContainerDied","Data":"50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.344878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" event={"ID":"ef7117e6-2beb-4153-85b5-6b75d68e1a9a","Type":"ContainerStarted","Data":"581e4bb948b84a06e9d471382fef5174f557535519e31f3036e2c3863f3c7203"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.348591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dde6-account-create-update-kdxb5" event={"ID":"52065a8d-3ef3-4770-9f24-60d40800efcb","Type":"ContainerDied","Data":"899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.348631 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="899dc0118b34b7fdd7f2d0c685682160183b1e0dcc9045b53854cc76d22fda08"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.348601 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-kdxb5"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.352245 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e4df-account-create-update-dwl4j" event={"ID":"e6da51ff-4f87-4c78-aff9-1b60b3a23633","Type":"ContainerDied","Data":"485740f51a86bb0a24287f599778cedf468953fa581256622194167a08ef4792"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.352291 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485740f51a86bb0a24287f599778cedf468953fa581256622194167a08ef4792"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.352258 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e4df-account-create-update-dwl4j"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.354319 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j6v9x" event={"ID":"afa60fd1-a148-4127-ae9d-72f6a61f6cce","Type":"ContainerDied","Data":"d2bb0001d0eeb44415d4c300e17d7487a6a70d3c49e58fe30d2dcfcffd92a4c1"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.354343 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j6v9x"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.354418 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2bb0001d0eeb44415d4c300e17d7487a6a70d3c49e58fe30d2dcfcffd92a4c1"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.367821 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d84g2"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.367839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d84g2" event={"ID":"0beff323-e564-46ab-b5d1-ff40920e373e","Type":"ContainerDied","Data":"a9af3faba61ee935bc1364d08d4073c06283c9c6cc04235fb28e0798f6b9f905"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.367946 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9af3faba61ee935bc1364d08d4073c06283c9c6cc04235fb28e0798f6b9f905"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.372350 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c800-account-create-update-qnvq4" event={"ID":"3eb827b1-9ec7-4234-a8e6-38072c48a09c","Type":"ContainerDied","Data":"92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.372386 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c800-account-create-update-qnvq4"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.372396 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a52b74229a8b089d3af8854e0cb035cc8ef8a9fd4c527ec16b1066050287e9"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.376696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sz9x6" event={"ID":"02770ee6-83bc-4c09-a98e-d1e1624bd759","Type":"ContainerDied","Data":"5b3ac7ea52a29bfa2a05b928f3ca6b39cdb10b51fe99db02ce3a6bc8740f11fa"}
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.376720 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3ac7ea52a29bfa2a05b928f3ca6b39cdb10b51fe99db02ce3a6bc8740f11fa"
Mar 11 09:01:19 crc kubenswrapper[4808]: I0311 09:01:19.376760 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sz9x6"
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.403865 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cgtv2" event={"ID":"a124b7a1-bcf0-4d53-82b1-2a32992b56b8","Type":"ContainerStarted","Data":"af8c28b1bc01858c7533acc734199553d01fb4d4f078a8c4a4f0170fb48a19fd"}
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.408131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" event={"ID":"ef7117e6-2beb-4153-85b5-6b75d68e1a9a","Type":"ContainerStarted","Data":"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"}
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.408605 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb"
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.413548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"dd4a12d0f40b70bed0ff12cd9961f609f614dcb0c77bb2b41c37fb77c51b62c9"}
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.413582 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"36d57f8f584cb1d8ebdc130edda1da090b5344f1959c1b5fbee4de63ad660d1d"}
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.413593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"5610b5414b923dbe5f29196fe9b69a93bc333d712a301053a8290a033d1900e2"}
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.449306 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" podStartSLOduration=5.449239965 podStartE2EDuration="5.449239965s" podCreationTimestamp="2026-03-11 09:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:22.447795273 +0000 UTC m=+1333.401118603" watchObservedRunningTime="2026-03-11 09:01:22.449239965 +0000 UTC m=+1333.402563295"
Mar 11 09:01:22 crc kubenswrapper[4808]: I0311 09:01:22.455189 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cgtv2" podStartSLOduration=2.622912709 podStartE2EDuration="8.455171016s" podCreationTimestamp="2026-03-11 09:01:14 +0000 UTC" firstStartedPulling="2026-03-11 09:01:15.986967559 +0000 UTC m=+1326.940290879" lastFinishedPulling="2026-03-11 09:01:21.819225866 +0000 UTC m=+1332.772549186" observedRunningTime="2026-03-11 09:01:22.433048966 +0000 UTC m=+1333.386372296" watchObservedRunningTime="2026-03-11 09:01:22.455171016 +0000 UTC m=+1333.408494346"
Mar 11 09:01:23 crc kubenswrapper[4808]: I0311 09:01:23.431684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"8703e021e4664d65f7198a1e10e27fae65ecd623ec350adc5affd0e319e1f91c"}
Mar 11 09:01:23 crc kubenswrapper[4808]: I0311 09:01:23.431998 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"1c242216161e4c3f5f19cceec89f8a4f772fb8534970698b0f08070d25afb355"}
Mar 11 09:01:23 crc kubenswrapper[4808]: I0311 09:01:23.432015 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"1fe8a90328e3c5fc5913211fdc47a8b9d9c43d633f87180fdcbdef9a02958f4e"}
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.447182 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerStarted","Data":"073e4d7fcffe762aa5c3e2750fba257255a258fb332465e09d8529f78025ea59"}
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.495799 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.278178142 podStartE2EDuration="46.495772918s" podCreationTimestamp="2026-03-11 09:00:38 +0000 UTC" firstStartedPulling="2026-03-11 09:01:12.616083368 +0000 UTC m=+1323.569406708" lastFinishedPulling="2026-03-11 09:01:18.833678164 +0000 UTC m=+1329.787001484" observedRunningTime="2026-03-11 09:01:24.485517432 +0000 UTC m=+1335.438840752" watchObservedRunningTime="2026-03-11 09:01:24.495772918 +0000 UTC m=+1335.449096278"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.796766 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"]
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.823270 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"]
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824183 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa60fd1-a148-4127-ae9d-72f6a61f6cce" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824213 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa60fd1-a148-4127-ae9d-72f6a61f6cce" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6da51ff-4f87-4c78-aff9-1b60b3a23633" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824249 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6da51ff-4f87-4c78-aff9-1b60b3a23633" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824275 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52065a8d-3ef3-4770-9f24-60d40800efcb" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824284 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="52065a8d-3ef3-4770-9f24-60d40800efcb" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824299 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb827b1-9ec7-4234-a8e6-38072c48a09c" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824307 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb827b1-9ec7-4234-a8e6-38072c48a09c" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824320 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0beff323-e564-46ab-b5d1-ff40920e373e" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824329 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0beff323-e564-46ab-b5d1-ff40920e373e" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: E0311 09:01:24.824344 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02770ee6-83bc-4c09-a98e-d1e1624bd759" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824353 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02770ee6-83bc-4c09-a98e-d1e1624bd759" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824606 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02770ee6-83bc-4c09-a98e-d1e1624bd759" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824626 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6da51ff-4f87-4c78-aff9-1b60b3a23633" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824638 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0beff323-e564-46ab-b5d1-ff40920e373e" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824654 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb827b1-9ec7-4234-a8e6-38072c48a09c" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824671 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa60fd1-a148-4127-ae9d-72f6a61f6cce" containerName="mariadb-database-create"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.824683 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="52065a8d-3ef3-4770-9f24-60d40800efcb" containerName="mariadb-account-create-update"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.825755 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.827590 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.837861 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"]
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.841516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.841580 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.845598 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.845669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.845784 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjdh\" (UniqueName: \"kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.845916 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948155 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjdh\" (UniqueName: \"kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948382 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948504 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.948610 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.951344 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.951571 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.952333 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.952772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.953010 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:24 crc kubenswrapper[4808]: I0311 09:01:24.975309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjdh\" (UniqueName: \"kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh\") pod \"dnsmasq-dns-75c886f8b5-92fbr\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.145248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr"
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.453993 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="dnsmasq-dns" containerID="cri-o://244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11" gracePeriod=10
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.600637 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"]
Mar 11 09:01:25 crc kubenswrapper[4808]: W0311 09:01:25.614071 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c773fa5_d0f3_424a_8cc6_09a36e87cd5e.slice/crio-ab4fae23ba47d7ebbea74fe0e06313b6954435a59003b8be69cfb9bc2063fcbd WatchSource:0}: Error finding container ab4fae23ba47d7ebbea74fe0e06313b6954435a59003b8be69cfb9bc2063fcbd: Status 404 returned error can't find the container with id ab4fae23ba47d7ebbea74fe0e06313b6954435a59003b8be69cfb9bc2063fcbd
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.849853 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb"
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.871880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb\") pod \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") "
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.871954 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb\") pod \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") "
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.871989 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhbq\" (UniqueName: \"kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq\") pod \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") "
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.872082 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc\") pod \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") "
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.872113 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config\") pod \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\" (UID: \"ef7117e6-2beb-4153-85b5-6b75d68e1a9a\") "
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.878663 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq" (OuterVolumeSpecName: "kube-api-access-xlhbq") pod "ef7117e6-2beb-4153-85b5-6b75d68e1a9a" (UID: "ef7117e6-2beb-4153-85b5-6b75d68e1a9a"). InnerVolumeSpecName "kube-api-access-xlhbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.977914 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhbq\" (UniqueName: \"kubernetes.io/projected/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-kube-api-access-xlhbq\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.979213 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef7117e6-2beb-4153-85b5-6b75d68e1a9a" (UID: "ef7117e6-2beb-4153-85b5-6b75d68e1a9a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.979882 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef7117e6-2beb-4153-85b5-6b75d68e1a9a" (UID: "ef7117e6-2beb-4153-85b5-6b75d68e1a9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.985759 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef7117e6-2beb-4153-85b5-6b75d68e1a9a" (UID: "ef7117e6-2beb-4153-85b5-6b75d68e1a9a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:25 crc kubenswrapper[4808]: I0311 09:01:25.995875 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config" (OuterVolumeSpecName: "config") pod "ef7117e6-2beb-4153-85b5-6b75d68e1a9a" (UID: "ef7117e6-2beb-4153-85b5-6b75d68e1a9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.079306 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.079337 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.079366 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.079378 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef7117e6-2beb-4153-85b5-6b75d68e1a9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.464134 4808 generic.go:334] "Generic (PLEG): container finished" podID="a124b7a1-bcf0-4d53-82b1-2a32992b56b8" containerID="af8c28b1bc01858c7533acc734199553d01fb4d4f078a8c4a4f0170fb48a19fd" exitCode=0
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.464279 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cgtv2" event={"ID":"a124b7a1-bcf0-4d53-82b1-2a32992b56b8","Type":"ContainerDied","Data":"af8c28b1bc01858c7533acc734199553d01fb4d4f078a8c4a4f0170fb48a19fd"}
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.470989 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerID="244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11" exitCode=0
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.471031 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" event={"ID":"ef7117e6-2beb-4153-85b5-6b75d68e1a9a","Type":"ContainerDied","Data":"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"}
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.471063 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb"
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.471099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-d48mb" event={"ID":"ef7117e6-2beb-4153-85b5-6b75d68e1a9a","Type":"ContainerDied","Data":"581e4bb948b84a06e9d471382fef5174f557535519e31f3036e2c3863f3c7203"}
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.471137 4808 scope.go:117] "RemoveContainer" containerID="244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.473271 4808 generic.go:334] "Generic (PLEG): container finished" podID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerID="faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e" exitCode=0
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.473382 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" event={"ID":"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e","Type":"ContainerDied","Data":"faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e"}
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.473422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" event={"ID":"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e","Type":"ContainerStarted","Data":"ab4fae23ba47d7ebbea74fe0e06313b6954435a59003b8be69cfb9bc2063fcbd"}
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.495492 4808 scope.go:117] "RemoveContainer" containerID="50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6"
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.625752 4808 scope.go:117] "RemoveContainer" containerID="244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"
Mar 11 09:01:26 crc kubenswrapper[4808]: E0311 09:01:26.626547 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11\": container with ID starting with 244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11 not found: ID does not exist" containerID="244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.626588 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11"} err="failed to get container status \"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11\": rpc error: code = NotFound desc = could not find container \"244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11\": container with ID starting with 244870f312ab843139d0c1e5739e03a22fa4f0708af8304b86526c208c1dea11 not found: ID does not exist"
Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.626614 4808 scope.go:117] "RemoveContainer" containerID="50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6"
Mar 11 09:01:26 crc kubenswrapper[4808]: E0311 09:01:26.627133 4808 log.go:32] "ContainerStatus from runtime
service failed" err="rpc error: code = NotFound desc = could not find container \"50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6\": container with ID starting with 50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6 not found: ID does not exist" containerID="50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6" Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.627205 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6"} err="failed to get container status \"50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6\": rpc error: code = NotFound desc = could not find container \"50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6\": container with ID starting with 50e421110418d18cfa8f4c1a70be55ff60183956b30975b358220fb0de877db6 not found: ID does not exist" Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.650453 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"] Mar 11 09:01:26 crc kubenswrapper[4808]: I0311 09:01:26.658902 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-d48mb"] Mar 11 09:01:27 crc kubenswrapper[4808]: I0311 09:01:27.502687 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" event={"ID":"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e","Type":"ContainerStarted","Data":"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2"} Mar 11 09:01:27 crc kubenswrapper[4808]: I0311 09:01:27.503007 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" Mar 11 09:01:27 crc kubenswrapper[4808]: I0311 09:01:27.532287 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" podStartSLOduration=3.5322597030000003 
podStartE2EDuration="3.532259703s" podCreationTimestamp="2026-03-11 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:27.527949308 +0000 UTC m=+1338.481272628" watchObservedRunningTime="2026-03-11 09:01:27.532259703 +0000 UTC m=+1338.485583063" Mar 11 09:01:27 crc kubenswrapper[4808]: I0311 09:01:27.803081 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" path="/var/lib/kubelet/pods/ef7117e6-2beb-4153-85b5-6b75d68e1a9a/volumes" Mar 11 09:01:27 crc kubenswrapper[4808]: I0311 09:01:27.879319 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.012605 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data\") pod \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.012739 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxzj\" (UniqueName: \"kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj\") pod \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.012817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle\") pod \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\" (UID: \"a124b7a1-bcf0-4d53-82b1-2a32992b56b8\") " Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.019806 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj" (OuterVolumeSpecName: "kube-api-access-xnxzj") pod "a124b7a1-bcf0-4d53-82b1-2a32992b56b8" (UID: "a124b7a1-bcf0-4d53-82b1-2a32992b56b8"). InnerVolumeSpecName "kube-api-access-xnxzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.048403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a124b7a1-bcf0-4d53-82b1-2a32992b56b8" (UID: "a124b7a1-bcf0-4d53-82b1-2a32992b56b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.072521 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data" (OuterVolumeSpecName: "config-data") pod "a124b7a1-bcf0-4d53-82b1-2a32992b56b8" (UID: "a124b7a1-bcf0-4d53-82b1-2a32992b56b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.116862 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnxzj\" (UniqueName: \"kubernetes.io/projected/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-kube-api-access-xnxzj\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.117070 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.117189 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a124b7a1-bcf0-4d53-82b1-2a32992b56b8-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.519424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cgtv2" event={"ID":"a124b7a1-bcf0-4d53-82b1-2a32992b56b8","Type":"ContainerDied","Data":"b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc"} Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.519843 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4b696b22a4f2871fe6b667f53a1536f885cb6b5bacab70f18d7736dd64c55dc" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.519491 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cgtv2" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.736854 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"] Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.774211 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5p7hh"] Mar 11 09:01:28 crc kubenswrapper[4808]: E0311 09:01:28.774785 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a124b7a1-bcf0-4d53-82b1-2a32992b56b8" containerName="keystone-db-sync" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.774815 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a124b7a1-bcf0-4d53-82b1-2a32992b56b8" containerName="keystone-db-sync" Mar 11 09:01:28 crc kubenswrapper[4808]: E0311 09:01:28.774849 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="init" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.774861 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="init" Mar 11 09:01:28 crc kubenswrapper[4808]: E0311 09:01:28.774891 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="dnsmasq-dns" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.774904 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="dnsmasq-dns" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.775175 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a124b7a1-bcf0-4d53-82b1-2a32992b56b8" containerName="keystone-db-sync" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.775202 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7117e6-2beb-4153-85b5-6b75d68e1a9a" containerName="dnsmasq-dns" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 
09:01:28.776043 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.779322 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.779587 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7fg4s" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.779704 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.779804 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.779927 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.789683 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.791809 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.812710 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5p7hh"] Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831429 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831523 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831579 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831619 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkr6\" (UniqueName: \"kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831676 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831712 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831742 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831807 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jhpr7\" (UniqueName: \"kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.831955 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.847410 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933437 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " 
pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933571 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933604 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933624 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkr6\" (UniqueName: \"kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933678 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933698 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.933738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhpr7\" (UniqueName: \"kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.934369 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.934628 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.935335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.935693 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.935991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.945931 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.951651 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.952065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.952497 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.953797 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts\") pod \"keystone-bootstrap-5p7hh\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.958231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhpr7\" (UniqueName: \"kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7\") pod \"keystone-bootstrap-5p7hh\" (UID: 
\"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:28 crc kubenswrapper[4808]: I0311 09:01:28.975267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkr6\" (UniqueName: \"kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6\") pod \"dnsmasq-dns-5985c59c55-hhpvt\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.010734 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.028119 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.032426 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rtcbw"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.033792 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035428 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035459 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035504 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035557 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035627 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.035671 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8xl\" (UniqueName: \"kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.036279 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.036475 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.036632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-q67p9" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.036788 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.038063 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.048044 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.064279 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rtcbw"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.109098 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2t99h"] Mar 11 09:01:29 crc 
kubenswrapper[4808]: I0311 09:01:29.110061 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.121739 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.122858 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.123918 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9gr4z" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.124172 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.136878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.136930 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2csk\" (UniqueName: \"kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.136967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8xl\" (UniqueName: \"kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc 
kubenswrapper[4808]: I0311 09:01:29.136999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137060 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137119 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137140 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137166 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137183 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2p85\" (UniqueName: \"kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85\") 
pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137240 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.137256 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.138267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.138835 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.142493 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.142544 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-sync-2t99h"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.146729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.150197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.152128 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.152312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.173591 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rksh6"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.193654 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8xl\" (UniqueName: \"kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl\") pod \"ceilometer-0\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") " pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.194379 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.197144 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pvdcw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.197368 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.216434 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rksh6"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237719 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237735 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id\") pod \"cinder-db-sync-rtcbw\" (UID: 
\"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2p85\" (UniqueName: \"kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc 
kubenswrapper[4808]: I0311 09:01:29.237896 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2csk\" (UniqueName: \"kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237918 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.237940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslwn\" (UniqueName: \"kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.248305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.251889 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.252945 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.252947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.254602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.255525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.264592 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data\") pod \"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.280837 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2p85\" (UniqueName: \"kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85\") pod 
\"cinder-db-sync-rtcbw\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.282580 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2csk\" (UniqueName: \"kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk\") pod \"neutron-db-sync-2t99h\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") " pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.295604 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.303724 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.305101 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.313854 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353013 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzftg\" (UniqueName: \"kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353068 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslwn\" (UniqueName: \"kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc 
kubenswrapper[4808]: I0311 09:01:29.353455 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353642 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353711 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 
crc kubenswrapper[4808]: I0311 09:01:29.353767 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.353792 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.361574 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.376166 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j6tdp"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.387117 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslwn\" (UniqueName: \"kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.390153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.399890 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.403170 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.411604 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.413044 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle\") pod \"barbican-db-sync-rksh6\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.413898 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gglgv" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.414283 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.429538 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rksh6" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463039 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463398 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 
09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463455 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463485 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzftg\" (UniqueName: \"kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463513 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j6tdp"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463546 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463582 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vc4d\" (UniqueName: \"kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d\") 
pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.463606 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.464193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.464732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.464623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.465562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " 
pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.465582 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.487104 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzftg\" (UniqueName: \"kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg\") pod \"dnsmasq-dns-ccd7c9f8f-zc7mn\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.526301 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="dnsmasq-dns" containerID="cri-o://f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2" gracePeriod=10 Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.546494 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2t99h" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.567298 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.568291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.568376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.568438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.568467 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vc4d\" (UniqueName: \"kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.568484 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.572972 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.575272 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.580799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.589881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vc4d\" (UniqueName: \"kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d\") pod \"placement-db-sync-j6tdp\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.764217 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.769969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.814169 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.874169 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5p7hh"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.900996 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.903036 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.911867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.911915 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.912078 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vkl5m" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.912114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 11 09:01:29 crc kubenswrapper[4808]: I0311 09:01:29.941665 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.008111 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rtcbw"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.045312 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.047042 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.050170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.050391 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.055341 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:01:30 crc kubenswrapper[4808]: W0311 09:01:30.071915 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b5595d_1d35_47d9_b6a2_196e30848a13.slice/crio-58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397 WatchSource:0}: Error finding container 58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397: Status 404 returned error can't find the container with id 58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397 Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082656 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgxj\" (UniqueName: 
\"kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082740 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082769 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.082854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.083018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.091793 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.127558 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" Mar 11 09:01:30 crc kubenswrapper[4808]: W0311 09:01:30.139500 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847a58e2_c27f_4b49_8300_cbe239822ffa.slice/crio-bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304 WatchSource:0}: Error finding container bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304: Status 404 returned error can't find the container with id bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304 Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.163430 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rksh6"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185564 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185615 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185655 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185686 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185717 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185744 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " 
pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " 
pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185923 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.185982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.186008 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9cz9\" (UniqueName: \"kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.186043 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.188097 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " 
pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.188704 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.196803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgxj\" (UniqueName: \"kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.197131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.199423 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.201748 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc 
kubenswrapper[4808]: I0311 09:01:30.201352 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.204688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.221708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgxj\" (UniqueName: \"kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.241300 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.245915 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.264987 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2t99h"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298388 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjdh\" (UniqueName: \"kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298483 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298603 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298640 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 
09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298668 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb\") pod \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\" (UID: \"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e\") " Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.298988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299008 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299056 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9cz9\" (UniqueName: \"kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299083 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299543 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.299677 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.301277 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.305195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.307016 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.307214 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh" (OuterVolumeSpecName: "kube-api-access-jqjdh") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "kube-api-access-jqjdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.308783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.322622 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.343981 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9cz9\" (UniqueName: \"kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.401787 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.402474 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjdh\" (UniqueName: \"kubernetes.io/projected/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-kube-api-access-jqjdh\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.426771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.444943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config" (OuterVolumeSpecName: "config") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.458117 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.467153 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.475299 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j6tdp"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.497698 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.506928 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.506954 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.506965 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.506973 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.517988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" (UID: "2c773fa5-d0f3-424a-8cc6-09a36e87cd5e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.571989 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerStarted","Data":"d3c2dc743330ba22109a8b3e17e73ad9e0471b125f83775a202c09df941db2a9"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.573412 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p7hh" event={"ID":"c0e6992f-4344-4cc1-86b9-6481b4b7205c","Type":"ContainerStarted","Data":"baba432d3961a2c86fc9d83251361663c5b70aad2fc46af4af66f314f4348ec9"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.573453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p7hh" event={"ID":"c0e6992f-4344-4cc1-86b9-6481b4b7205c","Type":"ContainerStarted","Data":"da7b425c7759c1dd15108d5e671a6460cfb4944f553b86a92744fd359d5eb3ed"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.576027 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rksh6" event={"ID":"847a58e2-c27f-4b49-8300-cbe239822ffa","Type":"ContainerStarted","Data":"bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.578550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rtcbw" event={"ID":"75b5595d-1d35-47d9-b6a2-196e30848a13","Type":"ContainerStarted","Data":"58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.582194 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" event={"ID":"0d405cc3-966f-4f51-b607-9b31e50c1bd3","Type":"ContainerStarted","Data":"5666ae7eb57a9873fbd02938939ab139b5ae2549f8cdbc3b72ff16ff365015e5"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.591553 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5p7hh" podStartSLOduration=2.591537808 podStartE2EDuration="2.591537808s" podCreationTimestamp="2026-03-11 09:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:30.588732397 +0000 UTC m=+1341.542055717" watchObservedRunningTime="2026-03-11 09:01:30.591537808 +0000 UTC m=+1341.544861128" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.592696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j6tdp" event={"ID":"712fc7d0-7b9d-4293-abfb-262e5482bfed","Type":"ContainerStarted","Data":"e22586ace7221589905bee2fdbc63872b7b6d52631bb3f3a5f4ccf4388a1f0d7"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.603958 4808 generic.go:334] "Generic (PLEG): container finished" podID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerID="f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2" exitCode=0 Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.604033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" event={"ID":"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e","Type":"ContainerDied","Data":"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.604063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" event={"ID":"2c773fa5-d0f3-424a-8cc6-09a36e87cd5e","Type":"ContainerDied","Data":"ab4fae23ba47d7ebbea74fe0e06313b6954435a59003b8be69cfb9bc2063fcbd"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.604084 4808 scope.go:117] "RemoveContainer" containerID="f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.604246 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-92fbr" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.608074 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.615667 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" event={"ID":"dcacf1a9-dbf5-42c3-90a3-03178f6e1659","Type":"ContainerStarted","Data":"bec31dacc6706207053c4db22ff410c02b1730aaf23871c6fd6dec062e4d0871"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.624833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t99h" event={"ID":"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e","Type":"ContainerStarted","Data":"d00ab90e53c9e0e609e98c1b0cbc4d27c4160ce951237f0afbb2a231696f4c16"} Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.652217 4808 scope.go:117] "RemoveContainer" containerID="faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.658481 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.666732 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-92fbr"] Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.683901 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.686462 4808 scope.go:117] "RemoveContainer" containerID="f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2" Mar 11 09:01:30 crc kubenswrapper[4808]: E0311 09:01:30.687157 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2\": container with ID starting with f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2 not found: ID does not exist" containerID="f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.687209 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2"} err="failed to get container status \"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2\": rpc error: code = NotFound desc = could not find container \"f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2\": container with ID starting with f3d937f4fa771f91ef99ac30a5e678a4052e9e09bc6ea07e5703714642907cc2 not found: ID does not exist" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.687238 4808 scope.go:117] "RemoveContainer" containerID="faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e" Mar 11 09:01:30 crc kubenswrapper[4808]: E0311 09:01:30.688679 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e\": container with ID starting with faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e not found: ID does not exist" containerID="faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 
09:01:30.688707 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e"} err="failed to get container status \"faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e\": rpc error: code = NotFound desc = could not find container \"faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e\": container with ID starting with faf97e0cc98aac2f8539339849061471c9ce7efbb3483298ef41c527f5d7c39e not found: ID does not exist" Mar 11 09:01:30 crc kubenswrapper[4808]: I0311 09:01:30.934038 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.039911 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.111552 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.118097 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.368597 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:31 crc kubenswrapper[4808]: W0311 09:01:31.400977 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7873193d_ee9a_4c37_960a_ae69b881833e.slice/crio-758d29d5622f0f7030ad61617c4904b4b1bd64c328bf502b5a2bb1c30ac05b02 WatchSource:0}: Error finding container 758d29d5622f0f7030ad61617c4904b4b1bd64c328bf502b5a2bb1c30ac05b02: Status 404 returned error can't find the container with id 758d29d5622f0f7030ad61617c4904b4b1bd64c328bf502b5a2bb1c30ac05b02 Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.663882 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerID="eb0547253f11aafc0e90b8cba5a50c52125439a905f93c09438369f437c0316a" exitCode=0 Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.663958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" event={"ID":"dcacf1a9-dbf5-42c3-90a3-03178f6e1659","Type":"ContainerDied","Data":"eb0547253f11aafc0e90b8cba5a50c52125439a905f93c09438369f437c0316a"} Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.669614 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t99h" event={"ID":"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e","Type":"ContainerStarted","Data":"e1d5167e0ef98d8c3a6364c193b5c0835578c635b67e1e0877f6c0259ac12fe8"} Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.698308 4808 generic.go:334] "Generic (PLEG): container finished" podID="0d405cc3-966f-4f51-b607-9b31e50c1bd3" containerID="a00f06f84b174aea30492ab98b9aa6c3f5cc5400288beeff48827e38415e060d" exitCode=0 Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.698387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" event={"ID":"0d405cc3-966f-4f51-b607-9b31e50c1bd3","Type":"ContainerDied","Data":"a00f06f84b174aea30492ab98b9aa6c3f5cc5400288beeff48827e38415e060d"} Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.701046 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerStarted","Data":"758d29d5622f0f7030ad61617c4904b4b1bd64c328bf502b5a2bb1c30ac05b02"} Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.705654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerStarted","Data":"66ae20f3bf5a14a305a8df78199a05ae676a66d1e12b8b60a33e21f7d4958b1c"} Mar 11 09:01:31 crc 
kubenswrapper[4808]: I0311 09:01:31.722036 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2t99h" podStartSLOduration=2.7220133669999997 podStartE2EDuration="2.722013367s" podCreationTimestamp="2026-03-11 09:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:31.713246493 +0000 UTC m=+1342.666569833" watchObservedRunningTime="2026-03-11 09:01:31.722013367 +0000 UTC m=+1342.675336687" Mar 11 09:01:31 crc kubenswrapper[4808]: I0311 09:01:31.826119 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" path="/var/lib/kubelet/pods/2c773fa5-d0f3-424a-8cc6-09a36e87cd5e/volumes" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.066843 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.157909 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.157992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.158019 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" 
(UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.158063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkr6\" (UniqueName: \"kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.158089 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.158112 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb\") pod \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\" (UID: \"0d405cc3-966f-4f51-b607-9b31e50c1bd3\") " Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.178050 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6" (OuterVolumeSpecName: "kube-api-access-6jkr6") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "kube-api-access-6jkr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.263428 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jkr6\" (UniqueName: \"kubernetes.io/projected/0d405cc3-966f-4f51-b607-9b31e50c1bd3-kube-api-access-6jkr6\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.434530 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.442699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.466817 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.466856 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.493142 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.499844 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.511736 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config" (OuterVolumeSpecName: "config") pod "0d405cc3-966f-4f51-b607-9b31e50c1bd3" (UID: "0d405cc3-966f-4f51-b607-9b31e50c1bd3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.568147 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.568180 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.568191 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d405cc3-966f-4f51-b607-9b31e50c1bd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.719478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" event={"ID":"0d405cc3-966f-4f51-b607-9b31e50c1bd3","Type":"ContainerDied","Data":"5666ae7eb57a9873fbd02938939ab139b5ae2549f8cdbc3b72ff16ff365015e5"} Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.719527 4808 scope.go:117] "RemoveContainer" containerID="a00f06f84b174aea30492ab98b9aa6c3f5cc5400288beeff48827e38415e060d" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.719647 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-hhpvt" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.728649 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerStarted","Data":"bd6797a9c0789fc636911fcda99c50039f8d094fc8897eab94b7ad7ddcfb73ba"} Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.732697 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerStarted","Data":"46df675c364b61e43a37239955a65bd3d4d46bb4ad4c57bf69a9ac026702a5fc"} Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.737555 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" event={"ID":"dcacf1a9-dbf5-42c3-90a3-03178f6e1659","Type":"ContainerStarted","Data":"23fac4e981367fd2ffbd1611629680dcbba440d3d62c634ab2ace85327b0385e"} Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.737615 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.782726 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" podStartSLOduration=3.782709416 podStartE2EDuration="3.782709416s" podCreationTimestamp="2026-03-11 09:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:32.778165404 +0000 UTC m=+1343.731488734" watchObservedRunningTime="2026-03-11 09:01:32.782709416 +0000 UTC m=+1343.736032736" Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.832972 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:32 crc kubenswrapper[4808]: I0311 09:01:32.841383 4808 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-hhpvt"] Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.752578 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerStarted","Data":"34bd0311487087f1fa5f1036d1b584e502b9bbca239b9ab876eea15df2e3d0ea"} Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.752691 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-log" containerID="cri-o://bd6797a9c0789fc636911fcda99c50039f8d094fc8897eab94b7ad7ddcfb73ba" gracePeriod=30 Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.753027 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-httpd" containerID="cri-o://34bd0311487087f1fa5f1036d1b584e502b9bbca239b9ab876eea15df2e3d0ea" gracePeriod=30 Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.756280 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerStarted","Data":"dafa87acce78967f76088a5597b5e118481b93f2b933b8f0a567a9e0eb949b35"} Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.756455 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-log" containerID="cri-o://46df675c364b61e43a37239955a65bd3d4d46bb4ad4c57bf69a9ac026702a5fc" gracePeriod=30 Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.756566 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-httpd" containerID="cri-o://dafa87acce78967f76088a5597b5e118481b93f2b933b8f0a567a9e0eb949b35" gracePeriod=30 Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.793672 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.793650376 podStartE2EDuration="5.793650376s" podCreationTimestamp="2026-03-11 09:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:33.779467576 +0000 UTC m=+1344.732790906" watchObservedRunningTime="2026-03-11 09:01:33.793650376 +0000 UTC m=+1344.746973696" Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.809763 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.809740942 podStartE2EDuration="5.809740942s" podCreationTimestamp="2026-03-11 09:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:33.801147643 +0000 UTC m=+1344.754470963" watchObservedRunningTime="2026-03-11 09:01:33.809740942 +0000 UTC m=+1344.763064262" Mar 11 09:01:33 crc kubenswrapper[4808]: I0311 09:01:33.819382 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d405cc3-966f-4f51-b607-9b31e50c1bd3" path="/var/lib/kubelet/pods/0d405cc3-966f-4f51-b607-9b31e50c1bd3/volumes" Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.779023 4808 generic.go:334] "Generic (PLEG): container finished" podID="7873193d-ee9a-4c37-960a-ae69b881833e" containerID="34bd0311487087f1fa5f1036d1b584e502b9bbca239b9ab876eea15df2e3d0ea" exitCode=0 Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.779351 4808 generic.go:334] "Generic (PLEG): container finished" podID="7873193d-ee9a-4c37-960a-ae69b881833e" 
containerID="bd6797a9c0789fc636911fcda99c50039f8d094fc8897eab94b7ad7ddcfb73ba" exitCode=143 Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.779098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerDied","Data":"34bd0311487087f1fa5f1036d1b584e502b9bbca239b9ab876eea15df2e3d0ea"} Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.779455 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerDied","Data":"bd6797a9c0789fc636911fcda99c50039f8d094fc8897eab94b7ad7ddcfb73ba"} Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.782809 4808 generic.go:334] "Generic (PLEG): container finished" podID="c0e6992f-4344-4cc1-86b9-6481b4b7205c" containerID="baba432d3961a2c86fc9d83251361663c5b70aad2fc46af4af66f314f4348ec9" exitCode=0 Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.782931 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p7hh" event={"ID":"c0e6992f-4344-4cc1-86b9-6481b4b7205c","Type":"ContainerDied","Data":"baba432d3961a2c86fc9d83251361663c5b70aad2fc46af4af66f314f4348ec9"} Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.787167 4808 generic.go:334] "Generic (PLEG): container finished" podID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerID="dafa87acce78967f76088a5597b5e118481b93f2b933b8f0a567a9e0eb949b35" exitCode=0 Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.787203 4808 generic.go:334] "Generic (PLEG): container finished" podID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerID="46df675c364b61e43a37239955a65bd3d4d46bb4ad4c57bf69a9ac026702a5fc" exitCode=143 Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.787230 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerDied","Data":"dafa87acce78967f76088a5597b5e118481b93f2b933b8f0a567a9e0eb949b35"} Mar 11 09:01:34 crc kubenswrapper[4808]: I0311 09:01:34.787280 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerDied","Data":"46df675c364b61e43a37239955a65bd3d4d46bb4ad4c57bf69a9ac026702a5fc"} Mar 11 09:01:39 crc kubenswrapper[4808]: I0311 09:01:39.765544 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" Mar 11 09:01:39 crc kubenswrapper[4808]: I0311 09:01:39.840108 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"] Mar 11 09:01:39 crc kubenswrapper[4808]: I0311 09:01:39.840339 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns" containerID="cri-o://e06cdeb9590b9d80c01d88d8ccb42a576bbd93754fc91726c0f46b7324808de6" gracePeriod=10 Mar 11 09:01:40 crc kubenswrapper[4808]: I0311 09:01:40.859407 4808 generic.go:334] "Generic (PLEG): container finished" podID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerID="e06cdeb9590b9d80c01d88d8ccb42a576bbd93754fc91726c0f46b7324808de6" exitCode=0 Mar 11 09:01:40 crc kubenswrapper[4808]: I0311 09:01:40.859500 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" event={"ID":"6ad27efa-241d-4ee1-978d-2dacc25cb7e1","Type":"ContainerDied","Data":"e06cdeb9590b9d80c01d88d8ccb42a576bbd93754fc91726c0f46b7324808de6"} Mar 11 09:01:44 crc kubenswrapper[4808]: I0311 09:01:44.134894 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.127:5353: connect: connection refused" Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.027351 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.027423 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.027461 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.028248 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.028542 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe" gracePeriod=600 Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.920748 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe" exitCode=0 Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.920920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe"} Mar 11 09:01:46 crc kubenswrapper[4808]: I0311 09:01:46.921561 4808 scope.go:117] "RemoveContainer" containerID="488e906783a49352d04d778f4c40f55061de3db9ceb8af5362f944dc622b1e1a" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.456035 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.469881 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.484237 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.535662 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle\") pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.536526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhpr7\" (UniqueName: \"kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7\") pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.536586 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.536662 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.536692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.536733 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgxj\" (UniqueName: 
\"kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.541880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs" (OuterVolumeSpecName: "logs") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.541985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.542677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj" (OuterVolumeSpecName: "kube-api-access-llgxj") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "kube-api-access-llgxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.563684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts" (OuterVolumeSpecName: "scripts") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.563736 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7" (OuterVolumeSpecName: "kube-api-access-jhpr7") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "kube-api-access-jhpr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.572606 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.637871 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts\") pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.637926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9cz9\" (UniqueName: \"kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.637948 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 
09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.637964 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.637994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys\") pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638013 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys\") pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638030 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638047 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638065 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data\") 
pod \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\" (UID: \"c0e6992f-4344-4cc1-86b9-6481b4b7205c\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638082 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.638106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639483 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run\") pod \"7873193d-ee9a-4c37-960a-ae69b881833e\" (UID: \"7873193d-ee9a-4c37-960a-ae69b881833e\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data\") pod \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\" (UID: \"52ea94c4-1455-495c-9cf1-3d25c0a1ceac\") " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639634 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs" (OuterVolumeSpecName: "logs") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639849 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639913 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639932 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhpr7\" (UniqueName: \"kubernetes.io/projected/c0e6992f-4344-4cc1-86b9-6481b4b7205c-kube-api-access-jhpr7\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639944 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639955 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639965 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639975 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgxj\" (UniqueName: \"kubernetes.io/projected/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-kube-api-access-llgxj\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.639985 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.643411 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts" (OuterVolumeSpecName: "scripts") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.643981 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9" (OuterVolumeSpecName: "kube-api-access-m9cz9") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "kube-api-access-m9cz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.645720 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts" (OuterVolumeSpecName: "scripts") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.645760 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.646131 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.648476 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.665522 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.671724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.672489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.683138 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data" (OuterVolumeSpecName: "config-data") pod "c0e6992f-4344-4cc1-86b9-6481b4b7205c" (UID: "c0e6992f-4344-4cc1-86b9-6481b4b7205c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.694825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data" (OuterVolumeSpecName: "config-data") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.698435 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.707225 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7873193d-ee9a-4c37-960a-ae69b881833e" (UID: "7873193d-ee9a-4c37-960a-ae69b881833e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.717982 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data" (OuterVolumeSpecName: "config-data") pod "52ea94c4-1455-495c-9cf1-3d25c0a1ceac" (UID: "52ea94c4-1455-495c-9cf1-3d25c0a1ceac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741116 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9cz9\" (UniqueName: \"kubernetes.io/projected/7873193d-ee9a-4c37-960a-ae69b881833e-kube-api-access-m9cz9\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741208 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741220 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741229 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741237 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741247 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741255 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741262 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741269 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741277 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7873193d-ee9a-4c37-960a-ae69b881833e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741291 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741299 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741307 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7873193d-ee9a-4c37-960a-ae69b881833e-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 
09:01:47.741314 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ea94c4-1455-495c-9cf1-3d25c0a1ceac-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.741321 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e6992f-4344-4cc1-86b9-6481b4b7205c-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.757503 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.761077 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.842761 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.842800 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.947684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7873193d-ee9a-4c37-960a-ae69b881833e","Type":"ContainerDied","Data":"758d29d5622f0f7030ad61617c4904b4b1bd64c328bf502b5a2bb1c30ac05b02"} Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.947808 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.954572 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5p7hh" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.954584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5p7hh" event={"ID":"c0e6992f-4344-4cc1-86b9-6481b4b7205c","Type":"ContainerDied","Data":"da7b425c7759c1dd15108d5e671a6460cfb4944f553b86a92744fd359d5eb3ed"} Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.954631 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7b425c7759c1dd15108d5e671a6460cfb4944f553b86a92744fd359d5eb3ed" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.961138 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"52ea94c4-1455-495c-9cf1-3d25c0a1ceac","Type":"ContainerDied","Data":"66ae20f3bf5a14a305a8df78199a05ae676a66d1e12b8b60a33e21f7d4958b1c"} Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.961248 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:47 crc kubenswrapper[4808]: E0311 09:01:47.969897 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ea94c4_1455_495c_9cf1_3d25c0a1ceac.slice/crio-66ae20f3bf5a14a305a8df78199a05ae676a66d1e12b8b60a33e21f7d4958b1c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7873193d_ee9a_4c37_960a_ae69b881833e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e6992f_4344_4cc1_86b9_6481b4b7205c.slice/crio-da7b425c7759c1dd15108d5e671a6460cfb4944f553b86a92744fd359d5eb3ed\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ea94c4_1455_495c_9cf1_3d25c0a1ceac.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e6992f_4344_4cc1_86b9_6481b4b7205c.slice\": RecentStats: unable to find data in memory cache]" Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.980663 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:47 crc kubenswrapper[4808]: I0311 09:01:47.994493 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.007317 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.025935 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043675 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d405cc3-966f-4f51-b607-9b31e50c1bd3" containerName="init" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043692 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d405cc3-966f-4f51-b607-9b31e50c1bd3" containerName="init" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043702 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e6992f-4344-4cc1-86b9-6481b4b7205c" containerName="keystone-bootstrap" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043709 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e6992f-4344-4cc1-86b9-6481b4b7205c" containerName="keystone-bootstrap" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043719 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043724 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043732 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="dnsmasq-dns" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043737 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="dnsmasq-dns" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043750 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-httpd" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043756 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-httpd" Mar 
11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043778 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="init" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043783 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="init" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043791 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043796 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.043806 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-httpd" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043811 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-httpd" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.043961 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c773fa5-d0f3-424a-8cc6-09a36e87cd5e" containerName="dnsmasq-dns" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044001 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e6992f-4344-4cc1-86b9-6481b4b7205c" containerName="keystone-bootstrap" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044011 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044023 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-httpd" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 
09:01:48.044038 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d405cc3-966f-4f51-b607-9b31e50c1bd3" containerName="init" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044047 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" containerName="glance-log" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044056 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" containerName="glance-httpd" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.044940 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.047384 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.048139 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.048304 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.048429 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vkl5m" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.052216 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.066661 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.068193 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.071054 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.071234 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.081856 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.174629 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.174821 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qslwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rksh6_openstack(847a58e2-c27f-4b49-8300-cbe239822ffa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.177636 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rksh6" 
podUID="847a58e2-c27f-4b49-8300-cbe239822ffa" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248292 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248339 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248722 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248756 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248807 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.248853 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: 
\"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff9b5\" (UniqueName: \"kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249233 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxhn\" (UniqueName: \"kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249427 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.249456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351484 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351634 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351710 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351748 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351786 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351923 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" 
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351941 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.351966 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff9b5\" (UniqueName: \"kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352596 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352676 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxhn\" (UniqueName: \"kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352703 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352747 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.352810 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.353065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.354198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.354962 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.355326 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.357062 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.359479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.360793 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.361017 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.371143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.371252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.371860 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.372055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.375046 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff9b5\" (UniqueName: \"kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5\") pod 
\"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.376002 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.377496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxhn\" (UniqueName: \"kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.386054 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.394398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") " pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.455574 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.616291 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5p7hh"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.623594 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5p7hh"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.665481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.715504 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tbnpr"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.716956 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.719054 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.719342 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.719494 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7fg4s" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.719558 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.720097 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.729218 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tbnpr"] Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861300 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861394 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxxz\" (UniqueName: \"kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861467 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861573 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.861706 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.963584 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.963661 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.963741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.964197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.964303 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxxz\" (UniqueName: 
\"kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.964490 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.968764 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.969635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.970321 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.973692 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.980086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.980715 4808 generic.go:334] "Generic (PLEG): container finished" podID="ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" containerID="e1d5167e0ef98d8c3a6364c193b5c0835578c635b67e1e0877f6c0259ac12fe8" exitCode=0
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.981520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t99h" event={"ID":"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e","Type":"ContainerDied","Data":"e1d5167e0ef98d8c3a6364c193b5c0835578c635b67e1e0877f6c0259ac12fe8"}
Mar 11 09:01:48 crc kubenswrapper[4808]: I0311 09:01:48.982044 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxxz\" (UniqueName: \"kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz\") pod \"keystone-bootstrap-tbnpr\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:48 crc kubenswrapper[4808]: E0311 09:01:48.982495 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d\\\"\"" pod="openstack/barbican-db-sync-rksh6" podUID="847a58e2-c27f-4b49-8300-cbe239822ffa"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.045545 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tbnpr"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.339752 4808 scope.go:117] "RemoveContainer" containerID="34bd0311487087f1fa5f1036d1b584e502b9bbca239b9ab876eea15df2e3d0ea"
Mar 11 09:01:49 crc kubenswrapper[4808]: E0311 09:01:49.366219 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838"
Mar 11 09:01:49 crc kubenswrapper[4808]: E0311 09:01:49.366442 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2p85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rtcbw_openstack(75b5595d-1d35-47d9-b6a2-196e30848a13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 11 09:01:49 crc kubenswrapper[4808]: E0311 09:01:49.367845 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rtcbw" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.581123 4808 scope.go:117] "RemoveContainer" containerID="bd6797a9c0789fc636911fcda99c50039f8d094fc8897eab94b7ad7ddcfb73ba"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.673219 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.692801 4808 scope.go:117] "RemoveContainer" containerID="dafa87acce78967f76088a5597b5e118481b93f2b933b8f0a567a9e0eb949b35"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.779968 4808 scope.go:117] "RemoveContainer" containerID="46df675c364b61e43a37239955a65bd3d4d46bb4ad4c57bf69a9ac026702a5fc"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.820874 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb\") pod \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") "
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.821023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config\") pod \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") "
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.821055 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltfv2\" (UniqueName: \"kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2\") pod \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") "
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.822503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc\") pod \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") "
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.822564 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb\") pod \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\" (UID: \"6ad27efa-241d-4ee1-978d-2dacc25cb7e1\") "
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.860586 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ea94c4-1455-495c-9cf1-3d25c0a1ceac" path="/var/lib/kubelet/pods/52ea94c4-1455-495c-9cf1-3d25c0a1ceac/volumes"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.862578 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7873193d-ee9a-4c37-960a-ae69b881833e" path="/var/lib/kubelet/pods/7873193d-ee9a-4c37-960a-ae69b881833e/volumes"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.863698 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e6992f-4344-4cc1-86b9-6481b4b7205c" path="/var/lib/kubelet/pods/c0e6992f-4344-4cc1-86b9-6481b4b7205c/volumes"
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.876553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2" (OuterVolumeSpecName: "kube-api-access-ltfv2") pod "6ad27efa-241d-4ee1-978d-2dacc25cb7e1" (UID: "6ad27efa-241d-4ee1-978d-2dacc25cb7e1"). InnerVolumeSpecName "kube-api-access-ltfv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.917035 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config" (OuterVolumeSpecName: "config") pod "6ad27efa-241d-4ee1-978d-2dacc25cb7e1" (UID: "6ad27efa-241d-4ee1-978d-2dacc25cb7e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.917592 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ad27efa-241d-4ee1-978d-2dacc25cb7e1" (UID: "6ad27efa-241d-4ee1-978d-2dacc25cb7e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.933646 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ad27efa-241d-4ee1-978d-2dacc25cb7e1" (UID: "6ad27efa-241d-4ee1-978d-2dacc25cb7e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.943578 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.943818 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltfv2\" (UniqueName: \"kubernetes.io/projected/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-kube-api-access-ltfv2\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.943924 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.943998 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:49 crc kubenswrapper[4808]: I0311 09:01:49.949975 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ad27efa-241d-4ee1-978d-2dacc25cb7e1" (UID: "6ad27efa-241d-4ee1-978d-2dacc25cb7e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.000428 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9"}
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.010121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" event={"ID":"6ad27efa-241d-4ee1-978d-2dacc25cb7e1","Type":"ContainerDied","Data":"719f5e0895343dc9d8628ff011d0e449d17587f13f927a5c19c6bc369923f55d"}
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.010181 4808 scope.go:117] "RemoveContainer" containerID="e06cdeb9590b9d80c01d88d8ccb42a576bbd93754fc91726c0f46b7324808de6"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.010312 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.037165 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerStarted","Data":"9455acbb1198e035922698b6fcc2f339f0368d2f6d1aaf238c322fd6157c5976"}
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.045100 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j6tdp" event={"ID":"712fc7d0-7b9d-4293-abfb-262e5482bfed","Type":"ContainerStarted","Data":"8847b3ed61fa0264c90d6d596e3065ce3477671b8ae7c4dcf8914edca1f79d84"}
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.046137 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ad27efa-241d-4ee1-978d-2dacc25cb7e1-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.049680 4808 scope.go:117] "RemoveContainer" containerID="c79da07aca0afa9e6f59f1241aa80a3dc20f000679b23e35d9b1c2b0255848c3"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.056790 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:01:50 crc kubenswrapper[4808]: E0311 09:01:50.062958 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838\\\"\"" pod="openstack/cinder-db-sync-rtcbw" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.077156 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"]
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.078906 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-lrrzj"]
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.083785 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j6tdp" podStartSLOduration=2.28147281 podStartE2EDuration="21.083768205s" podCreationTimestamp="2026-03-11 09:01:29 +0000 UTC" firstStartedPulling="2026-03-11 09:01:30.515027484 +0000 UTC m=+1341.468350794" lastFinishedPulling="2026-03-11 09:01:49.317322859 +0000 UTC m=+1360.270646189" observedRunningTime="2026-03-11 09:01:50.066513354 +0000 UTC m=+1361.019836674" watchObservedRunningTime="2026-03-11 09:01:50.083768205 +0000 UTC m=+1361.037091525"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.189338 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tbnpr"]
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.349883 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t99h"
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.453129 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config\") pod \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") "
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.453528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle\") pod \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") "
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.453685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2csk\" (UniqueName: \"kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk\") pod \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\" (UID: \"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e\") "
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.462570 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk" (OuterVolumeSpecName: "kube-api-access-c2csk") pod "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" (UID: "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e"). InnerVolumeSpecName "kube-api-access-c2csk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.482983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" (UID: "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.484595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config" (OuterVolumeSpecName: "config") pod "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" (UID: "ee99a1df-1d19-4463-a0ae-84a18e2f6d4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.555018 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2csk\" (UniqueName: \"kubernetes.io/projected/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-kube-api-access-c2csk\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.555048 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.555060 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:01:50 crc kubenswrapper[4808]: I0311 09:01:50.869619 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:01:51 crc kubenswrapper[4808]: W0311 09:01:51.021530 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a9b795b_5646_4126_b1fb_609c53efdf13.slice/crio-1095bead7237ff4804966e486e7e792116f58e022df7a3fb9f0c1ba328442629 WatchSource:0}: Error finding container 1095bead7237ff4804966e486e7e792116f58e022df7a3fb9f0c1ba328442629: Status 404 returned error can't find the container with id 1095bead7237ff4804966e486e7e792116f58e022df7a3fb9f0c1ba328442629
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.070901 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerStarted","Data":"1095bead7237ff4804966e486e7e792116f58e022df7a3fb9f0c1ba328442629"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.072828 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t99h" event={"ID":"ee99a1df-1d19-4463-a0ae-84a18e2f6d4e","Type":"ContainerDied","Data":"d00ab90e53c9e0e609e98c1b0cbc4d27c4160ce951237f0afbb2a231696f4c16"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.072860 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00ab90e53c9e0e609e98c1b0cbc4d27c4160ce951237f0afbb2a231696f4c16"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.072925 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t99h"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.079270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerStarted","Data":"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.079387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerStarted","Data":"b37f8681748e68387428da1d9fc7e8d6acf041470008e3ea53e89725348b9548"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.083169 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbnpr" event={"ID":"657b6f1c-c040-46ea-b7b9-603c41615d66","Type":"ContainerStarted","Data":"cb3ce65524aaf92c57bd65d02e77f99b1572dbdb2d2fc4229b1ecb25766b4545"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.083194 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbnpr" event={"ID":"657b6f1c-c040-46ea-b7b9-603c41615d66","Type":"ContainerStarted","Data":"8abd1cb927804260fa2abe80fcd03fb1d9eb70673b8db1963b676c931b65aeec"}
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.116977 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tbnpr" podStartSLOduration=3.116959426 podStartE2EDuration="3.116959426s" podCreationTimestamp="2026-03-11 09:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:51.110451537 +0000 UTC m=+1362.063774857" watchObservedRunningTime="2026-03-11 09:01:51.116959426 +0000 UTC m=+1362.070282746"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.332622 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"]
Mar 11 09:01:51 crc kubenswrapper[4808]: E0311 09:01:51.333774 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" containerName="neutron-db-sync"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.333788 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" containerName="neutron-db-sync"
Mar 11 09:01:51 crc kubenswrapper[4808]: E0311 09:01:51.333805 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="init"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.333812 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="init"
Mar 11 09:01:51 crc kubenswrapper[4808]: E0311 09:01:51.333831 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.333837 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.334114 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" containerName="neutron-db-sync"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.334143 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.335538 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.351897 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"]
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.495213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mb4\" (UniqueName: \"kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.495292 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.495330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.496411 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.496682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.496843 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.598636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.598813 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mb4\" (UniqueName: \"kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.598906 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.598978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.599093 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.599148 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.600191 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.600297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.600397 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.601314 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.603249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.628020 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mb4\" (UniqueName: \"kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4\") pod \"dnsmasq-dns-7859c7799c-pp2j9\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.676906 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.711926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"]
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.713897 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.719744 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"]
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.720387 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.722429 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9gr4z"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.722470 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.722517 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.812138 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" path="/var/lib/kubelet/pods/6ad27efa-241d-4ee1-978d-2dacc25cb7e1/volumes"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.903752 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv42w\" (UniqueName: \"kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.904124 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.904244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.904281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:51 crc kubenswrapper[4808]: I0311 09:01:51.904405 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.009315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.009380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv42w\" (UniqueName: \"kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.009431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.009454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.009498 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.021312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj"
Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.022003 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName:
\"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.026387 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.040091 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv42w\" (UniqueName: \"kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.042271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config\") pod \"neutron-764ddbc49b-rd7qj\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.098135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerStarted","Data":"296fb8cceddadaf5b4b35036cafc79c48ff95644998859118d49bfb4a73d6e43"} Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.143895 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.254368 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"] Mar 11 09:01:52 crc kubenswrapper[4808]: I0311 09:01:52.511183 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"] Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.111718 4808 generic.go:334] "Generic (PLEG): container finished" podID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerID="e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca" exitCode=0 Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.112226 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" event={"ID":"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4","Type":"ContainerDied","Data":"e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.112254 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" event={"ID":"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4","Type":"ContainerStarted","Data":"d78875dacf15f406770744a6e4c466766d8aab49d2cf28b03058283f6e55dea9"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.118790 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerStarted","Data":"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.120903 4808 generic.go:334] "Generic (PLEG): container finished" podID="712fc7d0-7b9d-4293-abfb-262e5482bfed" containerID="8847b3ed61fa0264c90d6d596e3065ce3477671b8ae7c4dcf8914edca1f79d84" exitCode=0 Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.120950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-j6tdp" event={"ID":"712fc7d0-7b9d-4293-abfb-262e5482bfed","Type":"ContainerDied","Data":"8847b3ed61fa0264c90d6d596e3065ce3477671b8ae7c4dcf8914edca1f79d84"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.122842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerStarted","Data":"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.122887 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerStarted","Data":"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.122897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerStarted","Data":"5b947e34552d117576ef7a2239d01f276eb51c651b264f2535121438fd3c5777"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.123475 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.129093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerStarted","Data":"8c9f08a6349aaa6356f7534edf61cd2a47293bb1bb86157e93a29265615a1e5b"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.129156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerStarted","Data":"a9eb276a3708ab32e878bcdcf4d3597cb44173cc940b8b74f6a1dee37bbdfdf6"} Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.159672 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.159653339 podStartE2EDuration="6.159653339s" podCreationTimestamp="2026-03-11 09:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:53.148939507 +0000 UTC m=+1364.102262827" watchObservedRunningTime="2026-03-11 09:01:53.159653339 +0000 UTC m=+1364.112976659" Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.188724 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-764ddbc49b-rd7qj" podStartSLOduration=2.188707853 podStartE2EDuration="2.188707853s" podCreationTimestamp="2026-03-11 09:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:53.187195519 +0000 UTC m=+1364.140518839" watchObservedRunningTime="2026-03-11 09:01:53.188707853 +0000 UTC m=+1364.142031173" Mar 11 09:01:53 crc kubenswrapper[4808]: I0311 09:01:53.221986 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.22196917 podStartE2EDuration="6.22196917s" podCreationTimestamp="2026-03-11 09:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:53.220857668 +0000 UTC m=+1364.174180998" watchObservedRunningTime="2026-03-11 09:01:53.22196917 +0000 UTC m=+1364.175292490" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.136008 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-lrrzj" podUID="6ad27efa-241d-4ee1-978d-2dacc25cb7e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 
09:01:54.143313 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" event={"ID":"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4","Type":"ContainerStarted","Data":"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a"} Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.144305 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.146683 4808 generic.go:334] "Generic (PLEG): container finished" podID="657b6f1c-c040-46ea-b7b9-603c41615d66" containerID="cb3ce65524aaf92c57bd65d02e77f99b1572dbdb2d2fc4229b1ecb25766b4545" exitCode=0 Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.146841 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbnpr" event={"ID":"657b6f1c-c040-46ea-b7b9-603c41615d66","Type":"ContainerDied","Data":"cb3ce65524aaf92c57bd65d02e77f99b1572dbdb2d2fc4229b1ecb25766b4545"} Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.195265 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" podStartSLOduration=3.195238738 podStartE2EDuration="3.195238738s" podCreationTimestamp="2026-03-11 09:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:54.175635638 +0000 UTC m=+1365.128958958" watchObservedRunningTime="2026-03-11 09:01:54.195238738 +0000 UTC m=+1365.148562078" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.615038 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs\") pod \"712fc7d0-7b9d-4293-abfb-262e5482bfed\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765405 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data\") pod \"712fc7d0-7b9d-4293-abfb-262e5482bfed\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765509 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vc4d\" (UniqueName: \"kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d\") pod \"712fc7d0-7b9d-4293-abfb-262e5482bfed\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle\") pod \"712fc7d0-7b9d-4293-abfb-262e5482bfed\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts\") pod \"712fc7d0-7b9d-4293-abfb-262e5482bfed\" (UID: \"712fc7d0-7b9d-4293-abfb-262e5482bfed\") " Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.765828 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs" (OuterVolumeSpecName: "logs") pod "712fc7d0-7b9d-4293-abfb-262e5482bfed" (UID: "712fc7d0-7b9d-4293-abfb-262e5482bfed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.766238 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712fc7d0-7b9d-4293-abfb-262e5482bfed-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.771015 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts" (OuterVolumeSpecName: "scripts") pod "712fc7d0-7b9d-4293-abfb-262e5482bfed" (UID: "712fc7d0-7b9d-4293-abfb-262e5482bfed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.772535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d" (OuterVolumeSpecName: "kube-api-access-9vc4d") pod "712fc7d0-7b9d-4293-abfb-262e5482bfed" (UID: "712fc7d0-7b9d-4293-abfb-262e5482bfed"). InnerVolumeSpecName "kube-api-access-9vc4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.792231 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data" (OuterVolumeSpecName: "config-data") pod "712fc7d0-7b9d-4293-abfb-262e5482bfed" (UID: "712fc7d0-7b9d-4293-abfb-262e5482bfed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.800139 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "712fc7d0-7b9d-4293-abfb-262e5482bfed" (UID: "712fc7d0-7b9d-4293-abfb-262e5482bfed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.869089 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.869858 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.870017 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712fc7d0-7b9d-4293-abfb-262e5482bfed-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4808]: I0311 09:01:54.870086 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vc4d\" (UniqueName: \"kubernetes.io/projected/712fc7d0-7b9d-4293-abfb-262e5482bfed-kube-api-access-9vc4d\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.144537 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:01:55 crc kubenswrapper[4808]: E0311 09:01:55.144939 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712fc7d0-7b9d-4293-abfb-262e5482bfed" containerName="placement-db-sync" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.144954 4808 
state_mem.go:107] "Deleted CPUSet assignment" podUID="712fc7d0-7b9d-4293-abfb-262e5482bfed" containerName="placement-db-sync" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.145157 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="712fc7d0-7b9d-4293-abfb-262e5482bfed" containerName="placement-db-sync" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.146081 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.149676 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.149837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.169864 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.172134 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j6tdp" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.172298 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j6tdp" event={"ID":"712fc7d0-7b9d-4293-abfb-262e5482bfed","Type":"ContainerDied","Data":"e22586ace7221589905bee2fdbc63872b7b6d52631bb3f3a5f4ccf4388a1f0d7"} Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.172351 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22586ace7221589905bee2fdbc63872b7b6d52631bb3f3a5f4ccf4388a1f0d7" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.276791 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.276863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.276940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54nj\" (UniqueName: \"kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.276969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.276991 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.277055 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.277126 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.330413 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.332177 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.338800 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.338803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.338887 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.338814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.339198 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gglgv" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.339394 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379120 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54nj\" 
(UniqueName: \"kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379228 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379253 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379284 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.379306 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.384760 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.389162 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.393407 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.397843 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.398037 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.412903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54nj\" (UniqueName: \"kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj\") pod \"neutron-587fb944fc-6frn6\" (UID: 
\"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.442214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config\") pod \"neutron-587fb944fc-6frn6\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.464258 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.480938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.480987 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.481019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvwz\" (UniqueName: \"kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.481069 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.481098 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.481116 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.481166 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.560166 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584415 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvwz\" (UniqueName: \"kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584560 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584588 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" 
Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.584674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.589959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.591568 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.594804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.597630 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.608483 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.609447 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.612998 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvwz\" (UniqueName: \"kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz\") pod \"placement-74c8dbd954-d5nz4\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.666849 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686282 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686454 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwxxz\" (UniqueName: \"kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686476 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686530 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.686591 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys\") pod \"657b6f1c-c040-46ea-b7b9-603c41615d66\" (UID: \"657b6f1c-c040-46ea-b7b9-603c41615d66\") " Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.689940 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.696459 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz" (OuterVolumeSpecName: "kube-api-access-vwxxz") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "kube-api-access-vwxxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.696769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts" (OuterVolumeSpecName: "scripts") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.697539 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.715574 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data" (OuterVolumeSpecName: "config-data") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.726885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "657b6f1c-c040-46ea-b7b9-603c41615d66" (UID: "657b6f1c-c040-46ea-b7b9-603c41615d66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789203 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789241 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789253 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwxxz\" (UniqueName: \"kubernetes.io/projected/657b6f1c-c040-46ea-b7b9-603c41615d66-kube-api-access-vwxxz\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789265 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 
09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789276 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4808]: I0311 09:01:55.789286 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/657b6f1c-c040-46ea-b7b9-603c41615d66-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.030278 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.182724 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tbnpr" event={"ID":"657b6f1c-c040-46ea-b7b9-603c41615d66","Type":"ContainerDied","Data":"8abd1cb927804260fa2abe80fcd03fb1d9eb70673b8db1963b676c931b65aeec"} Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.182776 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tbnpr" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.182798 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abd1cb927804260fa2abe80fcd03fb1d9eb70673b8db1963b676c931b65aeec" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.329929 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:01:56 crc kubenswrapper[4808]: E0311 09:01:56.330667 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657b6f1c-c040-46ea-b7b9-603c41615d66" containerName="keystone-bootstrap" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.330771 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="657b6f1c-c040-46ea-b7b9-603c41615d66" containerName="keystone-bootstrap" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.331028 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="657b6f1c-c040-46ea-b7b9-603c41615d66" containerName="keystone-bootstrap" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.332643 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.337424 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7fg4s" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.337486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.337700 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.337864 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.338106 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.338123 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.358128 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.399802 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmqz5\" (UniqueName: \"kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.399866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " 
pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.399904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.399939 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.399977 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.400012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.400680 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " 
pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.401260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503721 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmqz5\" (UniqueName: \"kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 
crc kubenswrapper[4808]: I0311 09:01:56.503918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.503984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.504086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.510963 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.511523 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.512185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.513697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.516799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.520450 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.522875 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs\") pod 
\"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.525035 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmqz5\" (UniqueName: \"kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5\") pod \"keystone-6447d66dcc-mc8df\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:56 crc kubenswrapper[4808]: I0311 09:01:56.655924 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.008065 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.128405 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:01:58 crc kubenswrapper[4808]: W0311 09:01:58.131487 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f04cfb3_ade7_4ab5_a497_0be1d758cad7.slice/crio-b4280b9132054522d6ddf770c67d9de49349a38b2b63727f0ff4611247da7761 WatchSource:0}: Error finding container b4280b9132054522d6ddf770c67d9de49349a38b2b63727f0ff4611247da7761: Status 404 returned error can't find the container with id b4280b9132054522d6ddf770c67d9de49349a38b2b63727f0ff4611247da7761 Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.227202 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerStarted","Data":"61eb3cc64742aa5f8e0fcc5334433bb4c72d03a30181457fa988d9e6937fa649"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.228047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerStarted","Data":"5744e1e3d7c24424d0ec7ee697296da0bd6bae1cef0d68b1c0d722bc0d72af72"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.228066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerStarted","Data":"ad946938b3f80d69c159970b1ea98d5dbcc84406c0a5fdac7640214bf8c43d88"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.228211 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.233160 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerStarted","Data":"26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.237451 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerStarted","Data":"b4280b9132054522d6ddf770c67d9de49349a38b2b63727f0ff4611247da7761"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.245793 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447d66dcc-mc8df" event={"ID":"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38","Type":"ContainerStarted","Data":"2cff0fdb0107f2b8631be6cf24556311063eef05685df078f55d9979aa002673"} Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.246573 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.263916 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-587fb944fc-6frn6" podStartSLOduration=3.263895387 
podStartE2EDuration="3.263895387s" podCreationTimestamp="2026-03-11 09:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:58.25678989 +0000 UTC m=+1369.210113210" watchObservedRunningTime="2026-03-11 09:01:58.263895387 +0000 UTC m=+1369.217218707" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.289270 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6447d66dcc-mc8df" podStartSLOduration=2.289247723 podStartE2EDuration="2.289247723s" podCreationTimestamp="2026-03-11 09:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:58.270930341 +0000 UTC m=+1369.224253681" watchObservedRunningTime="2026-03-11 09:01:58.289247723 +0000 UTC m=+1369.242571043" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.456263 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.456306 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.485530 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.514036 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.667303 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.667812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0"
Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.698962 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 11 09:01:58 crc kubenswrapper[4808]: I0311 09:01:58.706883 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.259125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerStarted","Data":"2b68ac5b1779d86b6bec7e3aa128e429df297b656bb8d0ebf0dffd758c3998f1"}
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.259531 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74c8dbd954-d5nz4"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.259552 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74c8dbd954-d5nz4"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.259567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerStarted","Data":"553c984b0b8459dbd21f31f9d84cf1612f1aeedce28219353b4b8bf87f44b318"}
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.262142 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447d66dcc-mc8df" event={"ID":"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38","Type":"ContainerStarted","Data":"b47c6964048276c5d98b0417ce7ca53e6a7bf3b09f49607b9460de27e6f58132"}
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.262949 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.262982 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.263137 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.263188 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 11 09:01:59 crc kubenswrapper[4808]: I0311 09:01:59.286690 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74c8dbd954-d5nz4" podStartSLOduration=4.286672274 podStartE2EDuration="4.286672274s" podCreationTimestamp="2026-03-11 09:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:59.284529952 +0000 UTC m=+1370.237853282" watchObservedRunningTime="2026-03-11 09:01:59.286672274 +0000 UTC m=+1370.239995594"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.148686 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553662-hzj4w"]
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.151386 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.155960 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.156213 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.156400 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.177238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kss92\" (UniqueName: \"kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92\") pod \"auto-csr-approver-29553662-hzj4w\" (UID: \"45244591-dcfa-4143-81a3-0be4bbca5450\") " pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.177666 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-hzj4w"]
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.280881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kss92\" (UniqueName: \"kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92\") pod \"auto-csr-approver-29553662-hzj4w\" (UID: \"45244591-dcfa-4143-81a3-0be4bbca5450\") " pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.310946 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kss92\" (UniqueName: \"kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92\") pod \"auto-csr-approver-29553662-hzj4w\" (UID: \"45244591-dcfa-4143-81a3-0be4bbca5450\") " pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:00 crc kubenswrapper[4808]: I0311 09:02:00.477158 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.039623 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-hzj4w"]
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.302414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-hzj4w" event={"ID":"45244591-dcfa-4143-81a3-0be4bbca5450","Type":"ContainerStarted","Data":"cef756f227cbfab7e772a668dfd784793869fd62868457362aded6801e5bdf88"}
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.302468 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.302748 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.302468 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.302878 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.473281 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.584445 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.678483 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.692927 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.779844 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"]
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.780060 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="dnsmasq-dns" containerID="cri-o://23fac4e981367fd2ffbd1611629680dcbba440d3d62c634ab2ace85327b0385e" gracePeriod=10
Mar 11 09:02:01 crc kubenswrapper[4808]: I0311 09:02:01.780462 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.335875 4808 generic.go:334] "Generic (PLEG): container finished" podID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerID="23fac4e981367fd2ffbd1611629680dcbba440d3d62c634ab2ace85327b0385e" exitCode=0
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.337172 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" event={"ID":"dcacf1a9-dbf5-42c3-90a3-03178f6e1659","Type":"ContainerDied","Data":"23fac4e981367fd2ffbd1611629680dcbba440d3d62c634ab2ace85327b0385e"}
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.821493 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941543 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzftg\" (UniqueName: \"kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941671 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941689 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941829 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.941874 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb\") pod \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\" (UID: \"dcacf1a9-dbf5-42c3-90a3-03178f6e1659\") "
Mar 11 09:02:02 crc kubenswrapper[4808]: I0311 09:02:02.982544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg" (OuterVolumeSpecName: "kube-api-access-qzftg") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "kube-api-access-qzftg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.010117 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.037606 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.038628 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.044393 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.044415 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.044425 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.044436 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzftg\" (UniqueName: \"kubernetes.io/projected/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-kube-api-access-qzftg\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.053350 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.056931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config" (OuterVolumeSpecName: "config") pod "dcacf1a9-dbf5-42c3-90a3-03178f6e1659" (UID: "dcacf1a9-dbf5-42c3-90a3-03178f6e1659"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.146165 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.146194 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcacf1a9-dbf5-42c3-90a3-03178f6e1659-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.344623 4808 generic.go:334] "Generic (PLEG): container finished" podID="45244591-dcfa-4143-81a3-0be4bbca5450" containerID="f0ec50125001e1ecc713469145b8273720df67689800b8bfa82e2d70ca2acaa4" exitCode=0
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.344683 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-hzj4w" event={"ID":"45244591-dcfa-4143-81a3-0be4bbca5450","Type":"ContainerDied","Data":"f0ec50125001e1ecc713469145b8273720df67689800b8bfa82e2d70ca2acaa4"}
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.348428 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.348551 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-zc7mn" event={"ID":"dcacf1a9-dbf5-42c3-90a3-03178f6e1659","Type":"ContainerDied","Data":"bec31dacc6706207053c4db22ff410c02b1730aaf23871c6fd6dec062e4d0871"}
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.348581 4808 scope.go:117] "RemoveContainer" containerID="23fac4e981367fd2ffbd1611629680dcbba440d3d62c634ab2ace85327b0385e"
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.387485 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"]
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.395637 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-zc7mn"]
Mar 11 09:02:03 crc kubenswrapper[4808]: I0311 09:02:03.806923 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" path="/var/lib/kubelet/pods/dcacf1a9-dbf5-42c3-90a3-03178f6e1659/volumes"
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.195830 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.322118 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kss92\" (UniqueName: \"kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92\") pod \"45244591-dcfa-4143-81a3-0be4bbca5450\" (UID: \"45244591-dcfa-4143-81a3-0be4bbca5450\") "
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.343435 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92" (OuterVolumeSpecName: "kube-api-access-kss92") pod "45244591-dcfa-4143-81a3-0be4bbca5450" (UID: "45244591-dcfa-4143-81a3-0be4bbca5450"). InnerVolumeSpecName "kube-api-access-kss92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.383083 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-hzj4w" event={"ID":"45244591-dcfa-4143-81a3-0be4bbca5450","Type":"ContainerDied","Data":"cef756f227cbfab7e772a668dfd784793869fd62868457362aded6801e5bdf88"}
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.383119 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef756f227cbfab7e772a668dfd784793869fd62868457362aded6801e5bdf88"
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.383621 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-hzj4w"
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.424792 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kss92\" (UniqueName: \"kubernetes.io/projected/45244591-dcfa-4143-81a3-0be4bbca5450-kube-api-access-kss92\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:07 crc kubenswrapper[4808]: I0311 09:02:07.534871 4808 scope.go:117] "RemoveContainer" containerID="eb0547253f11aafc0e90b8cba5a50c52125439a905f93c09438369f437c0316a"
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.261173 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553656-v2j29"]
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.271230 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553656-v2j29"]
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.392469 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rksh6" event={"ID":"847a58e2-c27f-4b49-8300-cbe239822ffa","Type":"ContainerStarted","Data":"af7c20691f2ca3c9ce9cd99eba3a140d610ed51c1ead1e4be3eb85a879f669b7"}
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400286 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerStarted","Data":"9661ec65feb5f893c832a6211b48278ae7c3f8f7c736d6ee39c52a10bf5d79bf"}
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400643 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-central-agent" containerID="cri-o://9455acbb1198e035922698b6fcc2f339f0368d2f6d1aaf238c322fd6157c5976" gracePeriod=30
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400753 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400802 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="proxy-httpd" containerID="cri-o://9661ec65feb5f893c832a6211b48278ae7c3f8f7c736d6ee39c52a10bf5d79bf" gracePeriod=30
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400857 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="sg-core" containerID="cri-o://26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f" gracePeriod=30
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.400899 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-notification-agent" containerID="cri-o://296fb8cceddadaf5b4b35036cafc79c48ff95644998859118d49bfb4a73d6e43" gracePeriod=30
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.441448 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rksh6" podStartSLOduration=1.8643080429999999 podStartE2EDuration="39.441421403s" podCreationTimestamp="2026-03-11 09:01:29 +0000 UTC" firstStartedPulling="2026-03-11 09:01:30.158977472 +0000 UTC m=+1341.112300782" lastFinishedPulling="2026-03-11 09:02:07.736090822 +0000 UTC m=+1378.689414142" observedRunningTime="2026-03-11 09:02:08.419538587 +0000 UTC m=+1379.372861907" watchObservedRunningTime="2026-03-11 09:02:08.441421403 +0000 UTC m=+1379.394744743"
Mar 11 09:02:08 crc kubenswrapper[4808]: I0311 09:02:08.441852 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.712694683 podStartE2EDuration="40.441843295s" podCreationTimestamp="2026-03-11 09:01:28 +0000 UTC" firstStartedPulling="2026-03-11 09:01:30.030380851 +0000 UTC m=+1340.983704171" lastFinishedPulling="2026-03-11 09:02:07.759529463 +0000 UTC m=+1378.712852783" observedRunningTime="2026-03-11 09:02:08.435755018 +0000 UTC m=+1379.389078388" watchObservedRunningTime="2026-03-11 09:02:08.441843295 +0000 UTC m=+1379.395166635"
Mar 11 09:02:08 crc kubenswrapper[4808]: E0311 09:02:08.555139 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfeb64e4_ece9_4403_8534_acf6cfccf457.slice/crio-26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfeb64e4_ece9_4403_8534_acf6cfccf457.slice/crio-conmon-26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f.scope\": RecentStats: unable to find data in memory cache]"
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.410825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rtcbw" event={"ID":"75b5595d-1d35-47d9-b6a2-196e30848a13","Type":"ContainerStarted","Data":"7466a0268d7368fd7ab3cd95b2a779f359f149cf2f4b6201476727e40a6d77bd"}
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415242 4808 generic.go:334] "Generic (PLEG): container finished" podID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerID="9661ec65feb5f893c832a6211b48278ae7c3f8f7c736d6ee39c52a10bf5d79bf" exitCode=0
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415266 4808 generic.go:334] "Generic (PLEG): container finished" podID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerID="26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f" exitCode=2
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415274 4808 generic.go:334] "Generic (PLEG): container finished" podID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerID="296fb8cceddadaf5b4b35036cafc79c48ff95644998859118d49bfb4a73d6e43" exitCode=0
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415282 4808 generic.go:334] "Generic (PLEG): container finished" podID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerID="9455acbb1198e035922698b6fcc2f339f0368d2f6d1aaf238c322fd6157c5976" exitCode=0
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415299 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerDied","Data":"9661ec65feb5f893c832a6211b48278ae7c3f8f7c736d6ee39c52a10bf5d79bf"}
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerDied","Data":"26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f"}
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerDied","Data":"296fb8cceddadaf5b4b35036cafc79c48ff95644998859118d49bfb4a73d6e43"}
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.415343 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerDied","Data":"9455acbb1198e035922698b6fcc2f339f0368d2f6d1aaf238c322fd6157c5976"}
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.443933 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rtcbw" podStartSLOduration=3.781466173 podStartE2EDuration="41.443917182s" podCreationTimestamp="2026-03-11 09:01:28 +0000 UTC" firstStartedPulling="2026-03-11 09:01:30.075021283 +0000 UTC m=+1341.028344603" lastFinishedPulling="2026-03-11 09:02:07.737472282 +0000 UTC m=+1378.690795612" observedRunningTime="2026-03-11 09:02:09.443657554 +0000 UTC m=+1380.396980894" watchObservedRunningTime="2026-03-11 09:02:09.443917182 +0000 UTC m=+1380.397240502"
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.754078 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.816760 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911163d2-4780-49ae-bbc0-523d657323a8" path="/var/lib/kubelet/pods/911163d2-4780-49ae-bbc0-523d657323a8/volumes"
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866611 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866774 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866800 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm8xl\" (UniqueName: \"kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866823 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866874 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.866937 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml\") pod \"bfeb64e4-ece9-4403-8534-acf6cfccf457\" (UID: \"bfeb64e4-ece9-4403-8534-acf6cfccf457\") "
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.868233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.868883 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.872969 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts" (OuterVolumeSpecName: "scripts") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.879536 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl" (OuterVolumeSpecName: "kube-api-access-zm8xl") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "kube-api-access-zm8xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.896524 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.946682 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968734 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968779 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968797 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968812 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfeb64e4-ece9-4403-8534-acf6cfccf457-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968829 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm8xl\" (UniqueName: \"kubernetes.io/projected/bfeb64e4-ece9-4403-8534-acf6cfccf457-kube-api-access-zm8xl\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.968846 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:09 crc kubenswrapper[4808]: I0311 09:02:09.969007 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data" (OuterVolumeSpecName: "config-data") pod "bfeb64e4-ece9-4403-8534-acf6cfccf457" (UID: "bfeb64e4-ece9-4403-8534-acf6cfccf457"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.072282 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeb64e4-ece9-4403-8534-acf6cfccf457-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.429767 4808 generic.go:334] "Generic (PLEG): container finished" podID="847a58e2-c27f-4b49-8300-cbe239822ffa" containerID="af7c20691f2ca3c9ce9cd99eba3a140d610ed51c1ead1e4be3eb85a879f669b7" exitCode=0
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.429905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rksh6" event={"ID":"847a58e2-c27f-4b49-8300-cbe239822ffa","Type":"ContainerDied","Data":"af7c20691f2ca3c9ce9cd99eba3a140d610ed51c1ead1e4be3eb85a879f669b7"}
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.437178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfeb64e4-ece9-4403-8534-acf6cfccf457","Type":"ContainerDied","Data":"d3c2dc743330ba22109a8b3e17e73ad9e0471b125f83775a202c09df941db2a9"}
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.437240 4808 scope.go:117] "RemoveContainer" containerID="9661ec65feb5f893c832a6211b48278ae7c3f8f7c736d6ee39c52a10bf5d79bf"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.437485 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.495907 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.501112 4808 scope.go:117] "RemoveContainer" containerID="26010bdaea83c0634aa7f139d8fcd7c895247f66b33bbd7b1aa3d40d74ee7f3f"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.506760 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.524895 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525229 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="init"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525257 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="init"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525269 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-notification-agent"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525276 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-notification-agent"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525288 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45244591-dcfa-4143-81a3-0be4bbca5450" containerName="oc"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525293 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="45244591-dcfa-4143-81a3-0be4bbca5450" containerName="oc"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525310 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="dnsmasq-dns"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525316 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="dnsmasq-dns"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525340 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="sg-core"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525346 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="sg-core"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525372 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="proxy-httpd"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525381 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="proxy-httpd"
Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.525391 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-central-agent"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525399 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-central-agent"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525543 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="proxy-httpd"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525561 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcacf1a9-dbf5-42c3-90a3-03178f6e1659" containerName="dnsmasq-dns"
Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525573 4808 memory_manager.go:354] "RemoveStaleState removing state"
podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="sg-core" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525586 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-central-agent" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525594 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" containerName="ceilometer-notification-agent" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.525605 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="45244591-dcfa-4143-81a3-0be4bbca5450" containerName="oc" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.527116 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.532511 4808 scope.go:117] "RemoveContainer" containerID="296fb8cceddadaf5b4b35036cafc79c48ff95644998859118d49bfb4a73d6e43" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.535926 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.536688 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.559158 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.580157 4808 scope.go:117] "RemoveContainer" containerID="9455acbb1198e035922698b6fcc2f339f0368d2f6d1aaf238c322fd6157c5976" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583337 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583379 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76f5\" (UniqueName: \"kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583463 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.583505 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 
09:02:10.583550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.624647 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:10 crc kubenswrapper[4808]: E0311 09:02:10.625625 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-m76f5 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="1b65c974-ed08-431d-a8bc-3e86af280319" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.684861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685106 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685337 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m76f5\" (UniqueName: \"kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.685955 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " 
pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.690447 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.690740 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.694111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.705897 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m76f5\" (UniqueName: \"kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:10 crc kubenswrapper[4808]: I0311 09:02:10.707674 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " pod="openstack/ceilometer-0" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.452265 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.469680 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498662 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498769 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498811 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m76f5\" (UniqueName: \"kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498902 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 
09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.498963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.499080 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd\") pod \"1b65c974-ed08-431d-a8bc-3e86af280319\" (UID: \"1b65c974-ed08-431d-a8bc-3e86af280319\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.499732 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.500050 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.503934 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts" (OuterVolumeSpecName: "scripts") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.505575 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data" (OuterVolumeSpecName: "config-data") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.507181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.509642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.513154 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5" (OuterVolumeSpecName: "kube-api-access-m76f5") pod "1b65c974-ed08-431d-a8bc-3e86af280319" (UID: "1b65c974-ed08-431d-a8bc-3e86af280319"). InnerVolumeSpecName "kube-api-access-m76f5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601622 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601678 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601696 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b65c974-ed08-431d-a8bc-3e86af280319-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601713 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601733 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601749 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b65c974-ed08-431d-a8bc-3e86af280319-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.601765 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m76f5\" (UniqueName: \"kubernetes.io/projected/1b65c974-ed08-431d-a8bc-3e86af280319-kube-api-access-m76f5\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.807577 4808 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="bfeb64e4-ece9-4403-8534-acf6cfccf457" path="/var/lib/kubelet/pods/bfeb64e4-ece9-4403-8534-acf6cfccf457/volumes" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.839780 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rksh6" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.905753 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle\") pod \"847a58e2-c27f-4b49-8300-cbe239822ffa\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.906233 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data\") pod \"847a58e2-c27f-4b49-8300-cbe239822ffa\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.906450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslwn\" (UniqueName: \"kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn\") pod \"847a58e2-c27f-4b49-8300-cbe239822ffa\" (UID: \"847a58e2-c27f-4b49-8300-cbe239822ffa\") " Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.913166 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn" (OuterVolumeSpecName: "kube-api-access-qslwn") pod "847a58e2-c27f-4b49-8300-cbe239822ffa" (UID: "847a58e2-c27f-4b49-8300-cbe239822ffa"). InnerVolumeSpecName "kube-api-access-qslwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.913177 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "847a58e2-c27f-4b49-8300-cbe239822ffa" (UID: "847a58e2-c27f-4b49-8300-cbe239822ffa"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4808]: I0311 09:02:11.929901 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847a58e2-c27f-4b49-8300-cbe239822ffa" (UID: "847a58e2-c27f-4b49-8300-cbe239822ffa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.008657 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.008691 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/847a58e2-c27f-4b49-8300-cbe239822ffa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.008706 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qslwn\" (UniqueName: \"kubernetes.io/projected/847a58e2-c27f-4b49-8300-cbe239822ffa-kube-api-access-qslwn\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.464496 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.464516 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rksh6" event={"ID":"847a58e2-c27f-4b49-8300-cbe239822ffa","Type":"ContainerDied","Data":"bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304"} Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.465007 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfbe72f0247d8422f282298a0728be8c6700436f5a507766ec9b153e81b78304" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.464562 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rksh6" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.642862 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.648676 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.682706 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:12 crc kubenswrapper[4808]: E0311 09:02:12.683055 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847a58e2-c27f-4b49-8300-cbe239822ffa" containerName="barbican-db-sync" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.683075 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="847a58e2-c27f-4b49-8300-cbe239822ffa" containerName="barbican-db-sync" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.683278 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="847a58e2-c27f-4b49-8300-cbe239822ffa" containerName="barbican-db-sync" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.712411 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.717260 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.717323 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723540 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723758 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsss5\" (UniqueName: \"kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.723841 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.732219 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.806000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.807698 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.811968 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pvdcw" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.812169 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.812335 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826435 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826491 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826517 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826587 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsss5\" (UniqueName: \"kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5\") pod \"ceilometer-0\" (UID: 
\"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826606 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826635 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826672 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826759 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwj2\" (UniqueName: \"kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: 
\"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826861 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.826895 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.829433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.830323 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.830635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.834155 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.834751 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.839406 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.849242 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.850798 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.852259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.859784 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.860342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsss5\" (UniqueName: \"kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5\") pod \"ceilometer-0\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " pod="openstack/ceilometer-0" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.869392 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.918522 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.920340 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930847 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930887 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwj2\" (UniqueName: \"kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930943 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930968 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.930999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931026 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931054 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjllh\" (UniqueName: \"kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931082 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931103 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931201 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvxp\" (UniqueName: \"kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931232 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931259 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.931282 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.932232 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.934523 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.945299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.946620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " 
pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.957103 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:12 crc kubenswrapper[4808]: I0311 09:02:12.969622 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwj2\" (UniqueName: \"kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2\") pod \"barbican-worker-6b745fd47c-v25f8\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.033373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvxp\" (UniqueName: \"kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034234 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: 
\"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034466 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034496 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034541 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: 
\"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034588 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjllh\" (UniqueName: \"kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034627 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.034654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.036013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.036275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: 
\"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.036597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.037257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.038735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.039721 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.041114 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " 
pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.041368 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.042000 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.043088 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.043343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.046012 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.050525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.066623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvxp\" (UniqueName: \"kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp\") pod \"barbican-keystone-listener-86db6b574-lsd58\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc 
kubenswrapper[4808]: I0311 09:02:13.071711 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjllh\" (UniqueName: \"kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh\") pod \"dnsmasq-dns-8449d68f4f-gc7c4\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.074749 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.137074 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.137135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.137244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.137633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.137792 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9mx\" (UniqueName: \"kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.147067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.242570 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr9mx\" (UniqueName: \"kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.242667 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.242700 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" 
Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.242737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.242829 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.243517 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.246928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.247444 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.248021 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.262051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr9mx\" (UniqueName: \"kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx\") pod \"barbican-api-86b7cb5956-tzj8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.326651 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.333531 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.366316 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.480113 4808 generic.go:334] "Generic (PLEG): container finished" podID="75b5595d-1d35-47d9-b6a2-196e30848a13" containerID="7466a0268d7368fd7ab3cd95b2a779f359f149cf2f4b6201476727e40a6d77bd" exitCode=0 Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.480177 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rtcbw" event={"ID":"75b5595d-1d35-47d9-b6a2-196e30848a13","Type":"ContainerDied","Data":"7466a0268d7368fd7ab3cd95b2a779f359f149cf2f4b6201476727e40a6d77bd"} Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.560321 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.622689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.800181 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b65c974-ed08-431d-a8bc-3e86af280319" path="/var/lib/kubelet/pods/1b65c974-ed08-431d-a8bc-3e86af280319/volumes" Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.874394 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:02:13 crc kubenswrapper[4808]: I0311 09:02:13.885739 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:14 crc kubenswrapper[4808]: W0311 09:02:14.053975 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda322ec3d_e25c_4ccf_8a5d_af1b78914c8f.slice/crio-3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1 WatchSource:0}: Error finding container 3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1: Status 404 returned error can't find the 
container with id 3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1 Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.056115 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.498085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerStarted","Data":"d8aa3bfc204c1fbab362a3e358821b2edbdd78652875ef2e61ff3450beb8bec7"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.498482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.498550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerStarted","Data":"7c27079d33e451ffa153a65361da4dfa11e1369b3f6cff896dfa022cdc0aee8f"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.498567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerStarted","Data":"3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.499560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerStarted","Data":"b4bb3bff315f5341075247c519d8f306799a8259fdd75e883cc1e10b032d7ea0"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.508473 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerID="02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211" exitCode=0 Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.508760 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" event={"ID":"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d","Type":"ContainerDied","Data":"02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.508788 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" event={"ID":"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d","Type":"ContainerStarted","Data":"4a994fd90ee317f00e756ac339babdc78cb30960a8b2492df675ce3755203488"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.523792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerStarted","Data":"0ec428591a54fc8c98b0acdd4e7245c695734dd18c90913fdfe58c9477c71b2f"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.538464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerStarted","Data":"502e64cd15683baf26ed13219963e8e13c916389fe2a641d97b1c00ff3385749"} Mar 11 09:02:14 crc kubenswrapper[4808]: I0311 09:02:14.547335 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86b7cb5956-tzj8f" podStartSLOduration=1.547317166 podStartE2EDuration="1.547317166s" podCreationTimestamp="2026-03-11 09:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:14.545327878 +0000 UTC m=+1385.498651198" watchObservedRunningTime="2026-03-11 09:02:14.547317166 +0000 UTC m=+1385.500640476" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.152298 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186259 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186351 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2p85\" (UniqueName: \"kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186454 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186473 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186501 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186499 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.186567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data\") pod \"75b5595d-1d35-47d9-b6a2-196e30848a13\" (UID: \"75b5595d-1d35-47d9-b6a2-196e30848a13\") " Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.187043 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75b5595d-1d35-47d9-b6a2-196e30848a13-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.200142 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.205750 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85" (OuterVolumeSpecName: "kube-api-access-x2p85") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "kube-api-access-x2p85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.206684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts" (OuterVolumeSpecName: "scripts") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.225590 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.289823 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2p85\" (UniqueName: \"kubernetes.io/projected/75b5595d-1d35-47d9-b6a2-196e30848a13-kube-api-access-x2p85\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.289853 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.289862 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.289871 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-scripts\") on node \"crc\" DevicePath \"\"" 
Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.293034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data" (OuterVolumeSpecName: "config-data") pod "75b5595d-1d35-47d9-b6a2-196e30848a13" (UID: "75b5595d-1d35-47d9-b6a2-196e30848a13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.391150 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b5595d-1d35-47d9-b6a2-196e30848a13-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.543625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rtcbw" event={"ID":"75b5595d-1d35-47d9-b6a2-196e30848a13","Type":"ContainerDied","Data":"58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397"} Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.544702 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58653438b666ac59f4ac7c66a50b554fea1e0549b1b3814b5692588ecec2a397" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.544833 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rtcbw" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.560918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" event={"ID":"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d","Type":"ContainerStarted","Data":"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9"} Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.561519 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.566222 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerStarted","Data":"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2"} Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.569238 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerStarted","Data":"83e4299f8db969047f115b2e6d865e8ed4751b1a09a6cc0470a630153af91366"} Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.569276 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.586154 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" podStartSLOduration=3.586136529 podStartE2EDuration="3.586136529s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:15.581968888 +0000 UTC m=+1386.535292208" watchObservedRunningTime="2026-03-11 09:02:15.586136529 +0000 UTC m=+1386.539459849" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.786024 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:15 crc kubenswrapper[4808]: E0311 09:02:15.786465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13" containerName="cinder-db-sync" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.786482 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13" containerName="cinder-db-sync" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.786678 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13" containerName="cinder-db-sync" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.787839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.792912 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.795890 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-q67p9" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.796082 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.815081 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.820651 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.876049 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.906199 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.907708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.908696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.908818 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.908970 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.909128 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc 
kubenswrapper[4808]: I0311 09:02:15.909236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mh7\" (UniqueName: \"kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.907771 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:15 crc kubenswrapper[4808]: I0311 09:02:15.924906 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011134 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011163 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x9mh7\" (UniqueName: \"kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb6g\" (UniqueName: \"kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011222 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011245 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011293 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011322 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.011406 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.013925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.019950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.020323 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.029648 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.030290 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.045109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mh7\" (UniqueName: \"kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7\") pod \"cinder-scheduler-0\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.069049 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.070473 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.074344 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.084434 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.108236 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112600 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112670 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: 
\"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112689 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112707 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112724 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112745 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.112761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 
09:02:16.113110 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjb6g\" (UniqueName: \"kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113177 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpc9\" (UniqueName: \"kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113230 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113702 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113824 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.113986 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.114536 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.115028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.115557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.118603 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.120679 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.120729 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.121320 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.132393 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjb6g\" (UniqueName: \"kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g\") pod \"dnsmasq-dns-7b8fcc65cc-vlcss\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214251 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " 
pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214393 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214415 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214431 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214448 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgj4\" (UniqueName: \"kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: 
\"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214529 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214564 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214583 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hpc9\" (UniqueName: \"kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214614 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 
09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.214778 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.215296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.221385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.221543 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.224261 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data\") pod 
\"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.227104 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.230903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hpc9\" (UniqueName: \"kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9\") pod \"cinder-api-0\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.235659 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.316136 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.316204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.317293 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom\") 
pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.317337 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.317412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgj4\" (UniqueName: \"kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.317446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.317509 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.321082 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: 
\"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.324263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.326614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.326924 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.326958 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.333200 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " 
pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.340945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgj4\" (UniqueName: \"kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4\") pod \"barbican-api-688446ffb8-4n8n7\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.404374 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.448767 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.606501 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerStarted","Data":"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423"} Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.625422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerStarted","Data":"f21d8ff4624c0e299072dec0ed64600ad702be3d85c6613566143c1a8fa60529"} Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.633439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerStarted","Data":"aa0c1e84c07ee55b3422a4d17c6d167a2c103b98d130bfcf9441a6a8c53b6cb2"} Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.633471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" 
event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerStarted","Data":"b632c1c0284e2ec0e59641f479394eaf69354fe98cfe9ecbe781f8db04299ccf"} Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.636445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.640178 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b745fd47c-v25f8" podStartSLOduration=3.343558369 podStartE2EDuration="4.640163375s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="2026-03-11 09:02:13.665533716 +0000 UTC m=+1384.618857036" lastFinishedPulling="2026-03-11 09:02:14.962138712 +0000 UTC m=+1385.915462042" observedRunningTime="2026-03-11 09:02:16.630188945 +0000 UTC m=+1387.583512265" watchObservedRunningTime="2026-03-11 09:02:16.640163375 +0000 UTC m=+1387.593486695" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.700796 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" podStartSLOduration=2.932727538 podStartE2EDuration="4.700778447s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="2026-03-11 09:02:13.885108948 +0000 UTC m=+1384.838432268" lastFinishedPulling="2026-03-11 09:02:15.653159867 +0000 UTC m=+1386.606483177" observedRunningTime="2026-03-11 09:02:16.65648599 +0000 UTC m=+1387.609809310" watchObservedRunningTime="2026-03-11 09:02:16.700778447 +0000 UTC m=+1387.654101767" Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.746287 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:02:16 crc kubenswrapper[4808]: I0311 09:02:16.987349 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:16 crc kubenswrapper[4808]: W0311 09:02:16.995205 4808 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30d088cb_412e_4f17_927e_545b63b3185b.slice/crio-14b55860d3647abee0664bd24ed228e86d39d1653701705cf67418450765bc4e WatchSource:0}: Error finding container 14b55860d3647abee0664bd24ed228e86d39d1653701705cf67418450765bc4e: Status 404 returned error can't find the container with id 14b55860d3647abee0664bd24ed228e86d39d1653701705cf67418450765bc4e Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.103433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:02:17 crc kubenswrapper[4808]: W0311 09:02:17.143386 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod909d233e_60cb_4a66_989b_2dc8706ea143.slice/crio-93e2de8bb571ae2993d01391b8092a8ef73ba6e25e9b7acafa556b504387547b WatchSource:0}: Error finding container 93e2de8bb571ae2993d01391b8092a8ef73ba6e25e9b7acafa556b504387547b: Status 404 returned error can't find the container with id 93e2de8bb571ae2993d01391b8092a8ef73ba6e25e9b7acafa556b504387547b Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.643049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerStarted","Data":"ef11d43904c461c9a7cb0e91aecdcb9e474505e6dae6884d10adfc0afde9fb73"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.645352 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerStarted","Data":"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.645402 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" 
event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerStarted","Data":"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.645414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerStarted","Data":"93e2de8bb571ae2993d01391b8092a8ef73ba6e25e9b7acafa556b504387547b"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.645507 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.647460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerStarted","Data":"d1944cb374befa9392ca208c08c1f194ef3875e319ef0b634adb20cebad58110"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.648719 4808 generic.go:334] "Generic (PLEG): container finished" podID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerID="2f7920b834438b19cba2838ec5212d8513e9815595eaaa3e5978fa4f1e1cc7e2" exitCode=0 Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.648791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" event={"ID":"6bf6365d-3a16-4b22-8d9a-0d261e818792","Type":"ContainerDied","Data":"2f7920b834438b19cba2838ec5212d8513e9815595eaaa3e5978fa4f1e1cc7e2"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.648807 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" event={"ID":"6bf6365d-3a16-4b22-8d9a-0d261e818792","Type":"ContainerStarted","Data":"e4c3094a831fbae89577ed216098a0d039a5839b3a6687b7e477f3ee6a940296"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.649875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerStarted","Data":"14b55860d3647abee0664bd24ed228e86d39d1653701705cf67418450765bc4e"} Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.650194 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="dnsmasq-dns" containerID="cri-o://592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9" gracePeriod=10 Mar 11 09:02:17 crc kubenswrapper[4808]: I0311 09:02:17.698015 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-688446ffb8-4n8n7" podStartSLOduration=1.697986632 podStartE2EDuration="1.697986632s" podCreationTimestamp="2026-03-11 09:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:17.680757421 +0000 UTC m=+1388.634080741" watchObservedRunningTime="2026-03-11 09:02:17.697986632 +0000 UTC m=+1388.651309972" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.223639 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.307785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.307842 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.307944 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjllh\" (UniqueName: \"kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.308022 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.308047 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.308094 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb\") pod \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\" (UID: \"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d\") " Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.324345 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh" (OuterVolumeSpecName: "kube-api-access-qjllh") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "kube-api-access-qjllh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.366034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.377859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.387148 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config" (OuterVolumeSpecName: "config") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.395609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.396735 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" (UID: "7a2bfbf2-850f-4ce8-a3f0-ead546d6775d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410345 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410470 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410487 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410499 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 
09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410510 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.410524 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjllh\" (UniqueName: \"kubernetes.io/projected/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d-kube-api-access-qjllh\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.666126 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" event={"ID":"6bf6365d-3a16-4b22-8d9a-0d261e818792","Type":"ContainerStarted","Data":"4c597aa8c547f3e11152f1df0b7195c03b366c3d24dc3cf3f131df2be45a7c43"} Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.667619 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.682960 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerID="592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9" exitCode=0 Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.683044 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" event={"ID":"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d","Type":"ContainerDied","Data":"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9"} Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.683094 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" event={"ID":"7a2bfbf2-850f-4ce8-a3f0-ead546d6775d","Type":"ContainerDied","Data":"4a994fd90ee317f00e756ac339babdc78cb30960a8b2492df675ce3755203488"} Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.683112 4808 scope.go:117] "RemoveContainer" 
containerID="592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.683160 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-gc7c4" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.702002 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerStarted","Data":"c5020b27fd47a8c5a7a76a26d9c16fed9c8c9186d14a473b08deb1259a7c4f00"} Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.703001 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" podStartSLOduration=3.702987082 podStartE2EDuration="3.702987082s" podCreationTimestamp="2026-03-11 09:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:18.701759307 +0000 UTC m=+1389.655082627" watchObservedRunningTime="2026-03-11 09:02:18.702987082 +0000 UTC m=+1389.656310402" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.703072 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.732678 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.741780 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-gc7c4"] Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.751643 4808 scope.go:117] "RemoveContainer" containerID="02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.792200 4808 scope.go:117] "RemoveContainer" 
containerID="592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9" Mar 11 09:02:18 crc kubenswrapper[4808]: E0311 09:02:18.793030 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9\": container with ID starting with 592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9 not found: ID does not exist" containerID="592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.793084 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9"} err="failed to get container status \"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9\": rpc error: code = NotFound desc = could not find container \"592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9\": container with ID starting with 592d5d8a780e07c5c8076fb769f13eb22a214281cb1ab22a56d45399de586dc9 not found: ID does not exist" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.793104 4808 scope.go:117] "RemoveContainer" containerID="02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211" Mar 11 09:02:18 crc kubenswrapper[4808]: E0311 09:02:18.793726 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211\": container with ID starting with 02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211 not found: ID does not exist" containerID="02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211" Mar 11 09:02:18 crc kubenswrapper[4808]: I0311 09:02:18.793752 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211"} err="failed to get container status \"02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211\": rpc error: code = NotFound desc = could not find container \"02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211\": container with ID starting with 02ce4fa0a884b9a66ca078ee374caeb0cde33936df599a173bf8026c00d8f211 not found: ID does not exist" Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.376114 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.714028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerStarted","Data":"52ab7660270800e9762a93dcee038a29b5580e99936735b6f8a6a592b04438f0"} Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.714166 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.721732 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerStarted","Data":"3b8b47d740e3ab79c797e194f1cd9e9a83d232e3cfe789409208d2153f9ca92d"} Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.721972 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.723806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerStarted","Data":"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b"} Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.742477 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.389156108 podStartE2EDuration="7.742459105s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="2026-03-11 09:02:13.57657554 +0000 UTC m=+1384.529898850" lastFinishedPulling="2026-03-11 09:02:18.929878537 +0000 UTC m=+1389.883201847" observedRunningTime="2026-03-11 09:02:19.737932214 +0000 UTC m=+1390.691255534" watchObservedRunningTime="2026-03-11 09:02:19.742459105 +0000 UTC m=+1390.695782425" Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.768660 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.768635796 podStartE2EDuration="3.768635796s" podCreationTimestamp="2026-03-11 09:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:19.757641437 +0000 UTC m=+1390.710964757" watchObservedRunningTime="2026-03-11 09:02:19.768635796 +0000 UTC m=+1390.721959126" Mar 11 09:02:19 crc kubenswrapper[4808]: I0311 09:02:19.800877 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" path="/var/lib/kubelet/pods/7a2bfbf2-850f-4ce8-a3f0-ead546d6775d/volumes" Mar 11 09:02:20 crc kubenswrapper[4808]: I0311 09:02:20.732765 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerStarted","Data":"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0"} Mar 11 09:02:20 crc kubenswrapper[4808]: I0311 09:02:20.733019 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api-log" containerID="cri-o://c5020b27fd47a8c5a7a76a26d9c16fed9c8c9186d14a473b08deb1259a7c4f00" gracePeriod=30 Mar 11 09:02:20 crc kubenswrapper[4808]: I0311 09:02:20.733105 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api" containerID="cri-o://3b8b47d740e3ab79c797e194f1cd9e9a83d232e3cfe789409208d2153f9ca92d" gracePeriod=30 Mar 11 09:02:20 crc kubenswrapper[4808]: I0311 09:02:20.763632 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.357175217 podStartE2EDuration="5.763613276s" podCreationTimestamp="2026-03-11 09:02:15 +0000 UTC" firstStartedPulling="2026-03-11 09:02:16.685211045 +0000 UTC m=+1387.638534365" lastFinishedPulling="2026-03-11 09:02:18.091649104 +0000 UTC m=+1389.044972424" observedRunningTime="2026-03-11 09:02:20.754716708 +0000 UTC m=+1391.708040028" watchObservedRunningTime="2026-03-11 09:02:20.763613276 +0000 UTC m=+1391.716936596" Mar 11 09:02:21 crc kubenswrapper[4808]: I0311 09:02:21.119654 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 11 09:02:21 crc kubenswrapper[4808]: I0311 09:02:21.958500 4808 generic.go:334] "Generic (PLEG): container finished" podID="30d088cb-412e-4f17-927e-545b63b3185b" containerID="3b8b47d740e3ab79c797e194f1cd9e9a83d232e3cfe789409208d2153f9ca92d" exitCode=0 Mar 11 09:02:21 crc kubenswrapper[4808]: I0311 09:02:21.958527 4808 generic.go:334] "Generic (PLEG): container finished" podID="30d088cb-412e-4f17-927e-545b63b3185b" containerID="c5020b27fd47a8c5a7a76a26d9c16fed9c8c9186d14a473b08deb1259a7c4f00" exitCode=143 Mar 11 09:02:21 crc kubenswrapper[4808]: I0311 09:02:21.961334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerDied","Data":"3b8b47d740e3ab79c797e194f1cd9e9a83d232e3cfe789409208d2153f9ca92d"} Mar 11 09:02:21 crc kubenswrapper[4808]: I0311 09:02:21.961384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerDied","Data":"c5020b27fd47a8c5a7a76a26d9c16fed9c8c9186d14a473b08deb1259a7c4f00"} Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.057879 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152246 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152535 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152636 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152681 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152709 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152725 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hpc9\" (UniqueName: \"kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.152763 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts\") pod \"30d088cb-412e-4f17-927e-545b63b3185b\" (UID: \"30d088cb-412e-4f17-927e-545b63b3185b\") " Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.157437 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.165292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs" (OuterVolumeSpecName: "logs") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.179559 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9" (OuterVolumeSpecName: "kube-api-access-5hpc9") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "kube-api-access-5hpc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.179712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts" (OuterVolumeSpecName: "scripts") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.182469 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.230527 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255028 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30d088cb-412e-4f17-927e-545b63b3185b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255067 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255079 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30d088cb-412e-4f17-927e-545b63b3185b-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255093 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hpc9\" (UniqueName: \"kubernetes.io/projected/30d088cb-412e-4f17-927e-545b63b3185b-kube-api-access-5hpc9\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255108 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.255118 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.257543 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.282630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data" (OuterVolumeSpecName: "config-data") pod "30d088cb-412e-4f17-927e-545b63b3185b" (UID: "30d088cb-412e-4f17-927e-545b63b3185b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.357073 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d088cb-412e-4f17-927e-545b63b3185b-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.563505 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.563861 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-587fb944fc-6frn6" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-api" containerID="cri-o://5744e1e3d7c24424d0ec7ee697296da0bd6bae1cef0d68b1c0d722bc0d72af72" gracePeriod=30 Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.564024 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-587fb944fc-6frn6" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" containerID="cri-o://61eb3cc64742aa5f8e0fcc5334433bb4c72d03a30181457fa988d9e6937fa649" gracePeriod=30 Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.595533 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-587fb944fc-6frn6" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": read tcp 10.217.0.2:53172->10.217.0.156:9696: read: connection reset by peer" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.598671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:02:22 crc kubenswrapper[4808]: 
E0311 09:02:22.599134 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="init" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599153 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="init" Mar 11 09:02:22 crc kubenswrapper[4808]: E0311 09:02:22.599162 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api-log" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599181 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api-log" Mar 11 09:02:22 crc kubenswrapper[4808]: E0311 09:02:22.599200 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="dnsmasq-dns" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599207 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="dnsmasq-dns" Mar 11 09:02:22 crc kubenswrapper[4808]: E0311 09:02:22.599220 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599227 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599400 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api-log" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599421 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d088cb-412e-4f17-927e-545b63b3185b" containerName="cinder-api" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.599437 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7a2bfbf2-850f-4ce8-a3f0-ead546d6775d" containerName="dnsmasq-dns" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.600378 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.634421 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764564 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764624 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " 
pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764702 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.764798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzd8\" (UniqueName: \"kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.866079 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.866370 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " 
pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.866481 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.866589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.867088 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.867229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzd8\" (UniqueName: \"kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.867690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: 
I0311 09:02:22.872262 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.873105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.873291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.873853 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.873956 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.875948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.885516 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzd8\" (UniqueName: \"kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8\") pod \"neutron-7676f56769-zslbs\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.919650 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.986829 4808 generic.go:334] "Generic (PLEG): container finished" podID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerID="61eb3cc64742aa5f8e0fcc5334433bb4c72d03a30181457fa988d9e6937fa649" exitCode=0 Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.986964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerDied","Data":"61eb3cc64742aa5f8e0fcc5334433bb4c72d03a30181457fa988d9e6937fa649"} Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.988988 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30d088cb-412e-4f17-927e-545b63b3185b","Type":"ContainerDied","Data":"14b55860d3647abee0664bd24ed228e86d39d1653701705cf67418450765bc4e"} Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.989094 4808 scope.go:117] "RemoveContainer" containerID="3b8b47d740e3ab79c797e194f1cd9e9a83d232e3cfe789409208d2153f9ca92d" Mar 11 09:02:22 crc kubenswrapper[4808]: I0311 09:02:22.989013 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.045515 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.057160 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.064861 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.067087 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.074875 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.075105 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.075252 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.075351 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.120593 4808 scope.go:117] "RemoveContainer" containerID="c5020b27fd47a8c5a7a76a26d9c16fed9c8c9186d14a473b08deb1259a7c4f00" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177815 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w44\" (UniqueName: \"kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177906 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.177996 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.178078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.178119 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.178136 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.279930 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.279999 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280019 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc 
kubenswrapper[4808]: I0311 09:02:23.280064 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280155 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2w44\" (UniqueName: \"kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280176 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.280794 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.287873 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.289934 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.290039 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.290441 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.291224 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.294879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.301231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2w44\" (UniqueName: \"kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44\") pod \"cinder-api-0\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.388279 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.694781 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:02:23 crc kubenswrapper[4808]: W0311 09:02:23.700275 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37361775_fb6c_486f_8d7b_fd93f31bbaf5.slice/crio-8c775eab8b7815798c1844ebf59bb7516455175fe0d7073a12c3138872d94ea1 WatchSource:0}: Error finding container 8c775eab8b7815798c1844ebf59bb7516455175fe0d7073a12c3138872d94ea1: Status 404 returned error can't find the container with id 8c775eab8b7815798c1844ebf59bb7516455175fe0d7073a12c3138872d94ea1 Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.764687 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:02:23 crc kubenswrapper[4808]: W0311 09:02:23.771415 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e8823d_6df6_41fb_b7cd_9cb19e680db1.slice/crio-b46362ca50d32a0a27a4ad3bf062ce747bfaf9bb1b99fc194374f9389ed1c025 WatchSource:0}: Error finding container b46362ca50d32a0a27a4ad3bf062ce747bfaf9bb1b99fc194374f9389ed1c025: Status 404 returned error can't find the container with id b46362ca50d32a0a27a4ad3bf062ce747bfaf9bb1b99fc194374f9389ed1c025 Mar 11 09:02:23 crc kubenswrapper[4808]: I0311 09:02:23.812471 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d088cb-412e-4f17-927e-545b63b3185b" path="/var/lib/kubelet/pods/30d088cb-412e-4f17-927e-545b63b3185b/volumes" Mar 11 09:02:24 crc kubenswrapper[4808]: I0311 09:02:24.013311 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" 
event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerStarted","Data":"8c775eab8b7815798c1844ebf59bb7516455175fe0d7073a12c3138872d94ea1"} Mar 11 09:02:24 crc kubenswrapper[4808]: I0311 09:02:24.019064 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerStarted","Data":"b46362ca50d32a0a27a4ad3bf062ce747bfaf9bb1b99fc194374f9389ed1c025"} Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.034108 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerStarted","Data":"313c3f34d3027bd9947d2e5694c49e600d145074cf0323486a527bfeeb269fbc"} Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.034721 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerStarted","Data":"35ee99bbbf3c1d04399562decf8fafbf34dea50fb8418e32d3bedd96e8190659"} Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.034746 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.039991 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerStarted","Data":"449aa92d48851baa151002599c1327e205ca3ac321e49b56f196db3dc8961bcc"} Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.061485 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7676f56769-zslbs" podStartSLOduration=3.061465816 podStartE2EDuration="3.061465816s" podCreationTimestamp="2026-03-11 09:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:25.05780593 
+0000 UTC m=+1396.011129250" watchObservedRunningTime="2026-03-11 09:02:25.061465816 +0000 UTC m=+1396.014789136" Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.062947 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.465616 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-587fb944fc-6frn6" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Mar 11 09:02:25 crc kubenswrapper[4808]: I0311 09:02:25.500187 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.072510 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerStarted","Data":"600ad91b4b18f17d71cd6096452f04faaca352ace36331fc5ba1a67011114ccf"} Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.100730 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.100713582 podStartE2EDuration="3.100713582s" podCreationTimestamp="2026-03-11 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:26.092535194 +0000 UTC m=+1397.045858514" watchObservedRunningTime="2026-03-11 09:02:26.100713582 +0000 UTC m=+1397.054036902" Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.236514 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.291180 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"] Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.291618 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="dnsmasq-dns" containerID="cri-o://9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a" gracePeriod=10 Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.565717 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.630264 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:26 crc kubenswrapper[4808]: I0311 09:02:26.678693 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.046389 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.090534 4808 generic.go:334] "Generic (PLEG): container finished" podID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerID="9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a" exitCode=0 Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.090776 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="cinder-scheduler" containerID="cri-o://ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b" gracePeriod=30 Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.091069 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.092850 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" event={"ID":"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4","Type":"ContainerDied","Data":"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a"} Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.092900 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-pp2j9" event={"ID":"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4","Type":"ContainerDied","Data":"d78875dacf15f406770744a6e4c466766d8aab49d2cf28b03058283f6e55dea9"} Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.092920 4808 scope.go:117] "RemoveContainer" containerID="9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.093150 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="probe" containerID="cri-o://20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0" gracePeriod=30 Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.096481 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.136567 4808 scope.go:117] "RemoveContainer" containerID="e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.194905 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.195331 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5mb4\" (UniqueName: \"kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.195418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.195534 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.195565 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.195614 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb\") pod \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\" (UID: \"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4\") " Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.200831 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4" (OuterVolumeSpecName: "kube-api-access-b5mb4") pod 
"7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "kube-api-access-b5mb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.231813 4808 scope.go:117] "RemoveContainer" containerID="9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a" Mar 11 09:02:27 crc kubenswrapper[4808]: E0311 09:02:27.243188 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a\": container with ID starting with 9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a not found: ID does not exist" containerID="9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.243232 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a"} err="failed to get container status \"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a\": rpc error: code = NotFound desc = could not find container \"9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a\": container with ID starting with 9314f7b9a7ff3e21f94a52be821753a056f6b9ae94a252eb2518da9cd797e85a not found: ID does not exist" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.243258 4808 scope.go:117] "RemoveContainer" containerID="e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.243488 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:02:27 crc kubenswrapper[4808]: E0311 09:02:27.243814 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca\": container with ID starting with e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca not found: ID does not exist" containerID="e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.243833 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca"} err="failed to get container status \"e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca\": rpc error: code = NotFound desc = could not find container \"e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca\": container with ID starting with e7e0bcde2f5d58b16ee466e360d7efc7de50d5ebc509e72267663a75ec0038ca not found: ID does not exist" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.292972 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.297565 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5mb4\" (UniqueName: \"kubernetes.io/projected/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-kube-api-access-b5mb4\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.297588 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.298758 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.319214 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.319858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.344539 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config" (OuterVolumeSpecName: "config") pod "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" (UID: "7fd87e10-8e91-4b2c-90a5-74cfcaa976b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.399541 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.399576 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.399585 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.399595 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.522667 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"] Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.529965 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-pp2j9"] Mar 11 09:02:27 crc kubenswrapper[4808]: I0311 09:02:27.800091 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" path="/var/lib/kubelet/pods/7fd87e10-8e91-4b2c-90a5-74cfcaa976b4/volumes" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.101992 4808 generic.go:334] "Generic (PLEG): container finished" podID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerID="20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0" exitCode=0 Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.102879 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerDied","Data":"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0"} Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.150896 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.414438 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:02:28 crc kubenswrapper[4808]: E0311 09:02:28.415068 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="dnsmasq-dns" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.415084 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="dnsmasq-dns" Mar 11 09:02:28 crc kubenswrapper[4808]: E0311 09:02:28.415104 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="init" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.415111 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="init" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.415309 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd87e10-8e91-4b2c-90a5-74cfcaa976b4" containerName="dnsmasq-dns" Mar 11 09:02:28 crc 
kubenswrapper[4808]: I0311 09:02:28.416277 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.438915 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531398 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gj5\" (UniqueName: \"kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531434 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531500 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531614 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.531646 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gj5\" (UniqueName: \"kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634694 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634717 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.634827 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.635184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.642035 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.642713 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.643962 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.647806 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.665152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle\") pod \"placement-665556c5fd-bnc2f\" (UID: 
\"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.665687 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gj5\" (UniqueName: \"kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5\") pod \"placement-665556c5fd-bnc2f\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.736951 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:28 crc kubenswrapper[4808]: I0311 09:02:28.957733 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.123142 4808 generic.go:334] "Generic (PLEG): container finished" podID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerID="5744e1e3d7c24424d0ec7ee697296da0bd6bae1cef0d68b1c0d722bc0d72af72" exitCode=0 Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.123482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerDied","Data":"5744e1e3d7c24424d0ec7ee697296da0bd6bae1cef0d68b1c0d722bc0d72af72"} Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.353540 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.452122 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.455212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.455298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.455743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.455852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.456004 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54nj\" (UniqueName: \"kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: 
\"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.456043 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.456065 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs\") pod \"ba48f43e-924f-4ba8-af86-f0574b9a625a\" (UID: \"ba48f43e-924f-4ba8-af86-f0574b9a625a\") " Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.470755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj" (OuterVolumeSpecName: "kube-api-access-x54nj") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "kube-api-access-x54nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.495764 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.493896 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.553661 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.554324 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86b7cb5956-tzj8f" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api-log" containerID="cri-o://7c27079d33e451ffa153a65361da4dfa11e1369b3f6cff896dfa022cdc0aee8f" gracePeriod=30 Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.554843 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86b7cb5956-tzj8f" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api" containerID="cri-o://d8aa3bfc204c1fbab362a3e358821b2edbdd78652875ef2e61ff3450beb8bec7" gracePeriod=30 Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.573630 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54nj\" (UniqueName: \"kubernetes.io/projected/ba48f43e-924f-4ba8-af86-f0574b9a625a-kube-api-access-x54nj\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.573854 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.603540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.608559 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config" (OuterVolumeSpecName: "config") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.628527 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.646805 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.651627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba48f43e-924f-4ba8-af86-f0574b9a625a" (UID: "ba48f43e-924f-4ba8-af86-f0574b9a625a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.678642 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.678676 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.678687 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.678695 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.678704 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba48f43e-924f-4ba8-af86-f0574b9a625a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:29 crc kubenswrapper[4808]: I0311 09:02:29.698146 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.136745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-587fb944fc-6frn6" event={"ID":"ba48f43e-924f-4ba8-af86-f0574b9a625a","Type":"ContainerDied","Data":"ad946938b3f80d69c159970b1ea98d5dbcc84406c0a5fdac7640214bf8c43d88"} Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.136762 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-587fb944fc-6frn6" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.137080 4808 scope.go:117] "RemoveContainer" containerID="61eb3cc64742aa5f8e0fcc5334433bb4c72d03a30181457fa988d9e6937fa649" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.147007 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerStarted","Data":"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb"} Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.147064 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerStarted","Data":"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1"} Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.147075 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerStarted","Data":"88aad07c41b45aa7c909d0c3c3f13a25003548f918ddba0ce040d2df8905eba5"} Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.147129 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.147749 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.150478 4808 generic.go:334] "Generic (PLEG): container finished" podID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerID="7c27079d33e451ffa153a65361da4dfa11e1369b3f6cff896dfa022cdc0aee8f" exitCode=143 Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.150527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" 
event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerDied","Data":"7c27079d33e451ffa153a65361da4dfa11e1369b3f6cff896dfa022cdc0aee8f"} Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.163473 4808 scope.go:117] "RemoveContainer" containerID="5744e1e3d7c24424d0ec7ee697296da0bd6bae1cef0d68b1c0d722bc0d72af72" Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.172396 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.177176 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-587fb944fc-6frn6"] Mar 11 09:02:30 crc kubenswrapper[4808]: I0311 09:02:30.188879 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-665556c5fd-bnc2f" podStartSLOduration=2.188859797 podStartE2EDuration="2.188859797s" podCreationTimestamp="2026-03-11 09:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:30.180847174 +0000 UTC m=+1401.134170504" watchObservedRunningTime="2026-03-11 09:02:30.188859797 +0000 UTC m=+1401.142183117" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.577476 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.619712 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mh7\" (UniqueName: \"kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.619971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.620157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.620377 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.620480 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.620602 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle\") pod \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\" (UID: \"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c\") " Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.620623 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.626823 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.627223 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts" (OuterVolumeSpecName: "scripts") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.649754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7" (OuterVolumeSpecName: "kube-api-access-x9mh7") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "kube-api-access-x9mh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.682198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.723377 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.723589 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.723649 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.723729 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mh7\" (UniqueName: \"kubernetes.io/projected/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-kube-api-access-x9mh7\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.723784 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.757557 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data" (OuterVolumeSpecName: "config-data") pod "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" (UID: "cb14cc88-783b-4bcb-bdbb-770da3ee3a4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.801789 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" path="/var/lib/kubelet/pods/ba48f43e-924f-4ba8-af86-f0574b9a625a/volumes" Mar 11 09:02:31 crc kubenswrapper[4808]: I0311 09:02:31.825967 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.174509 4808 generic.go:334] "Generic (PLEG): container finished" podID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerID="ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b" exitCode=0 Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.174571 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerDied","Data":"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b"} Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.174599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb14cc88-783b-4bcb-bdbb-770da3ee3a4c","Type":"ContainerDied","Data":"ef11d43904c461c9a7cb0e91aecdcb9e474505e6dae6884d10adfc0afde9fb73"} Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.174633 4808 scope.go:117] "RemoveContainer" containerID="20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.174893 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.203413 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.213864 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.221552 4808 scope.go:117] "RemoveContainer" containerID="ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237113 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.237543 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237569 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.237613 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-api" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237623 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-api" Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.237640 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="cinder-scheduler" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237647 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="cinder-scheduler" Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.237661 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="probe" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237668 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="probe" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237875 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-httpd" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237898 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba48f43e-924f-4ba8-af86-f0574b9a625a" containerName="neutron-api" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237918 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="cinder-scheduler" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.237939 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" containerName="probe" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.239577 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.243212 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.257691 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.269372 4808 scope.go:117] "RemoveContainer" containerID="20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0" Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.271524 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0\": container with ID starting with 20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0 not found: ID does not exist" containerID="20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.271569 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0"} err="failed to get container status \"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0\": rpc error: code = NotFound desc = could not find container \"20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0\": container with ID starting with 20ae75e27c597ae9e6320c0668c37e44b4681167b9e5701c9c1fa2b7ecd147d0 not found: ID does not exist" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.271595 4808 scope.go:117] "RemoveContainer" containerID="ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b" Mar 11 09:02:32 crc kubenswrapper[4808]: E0311 09:02:32.271989 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b\": container with ID starting with ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b not found: ID does not exist" containerID="ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.272021 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b"} err="failed to get container status \"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b\": rpc error: code = NotFound desc = could not find container \"ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b\": container with ID starting with ed52150a97a48f8f1535a039331bb67ac74c8cc9b329a7a0f079513d8757921b not found: ID does not exist" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.440942 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7ld\" (UniqueName: \"kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.441107 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.441194 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.441226 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.441282 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.441394 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.543211 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.543276 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc 
kubenswrapper[4808]: I0311 09:02:32.543335 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.543420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.543527 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7ld\" (UniqueName: \"kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.543635 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.544396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.548084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.548923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.551066 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.561013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7ld\" (UniqueName: \"kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.561544 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " pod="openstack/cinder-scheduler-0" Mar 11 09:02:32 crc kubenswrapper[4808]: I0311 09:02:32.860015 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.200368 4808 generic.go:334] "Generic (PLEG): container finished" podID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerID="d8aa3bfc204c1fbab362a3e358821b2edbdd78652875ef2e61ff3450beb8bec7" exitCode=0 Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.200402 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerDied","Data":"d8aa3bfc204c1fbab362a3e358821b2edbdd78652875ef2e61ff3450beb8bec7"} Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.200995 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b7cb5956-tzj8f" event={"ID":"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f","Type":"ContainerDied","Data":"3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1"} Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.201023 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e71278cc3326489d8d7e180d446a590787add68f637d62ac244a8afb27e9af1" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.208706 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.251201 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 11 09:02:33 crc kubenswrapper[4808]: E0311 09:02:33.252817 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.252845 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api" Mar 11 09:02:33 crc kubenswrapper[4808]: E0311 09:02:33.252865 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api-log" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.252873 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api-log" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.253117 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api-log" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.253162 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" containerName="barbican-api" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.253826 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.257964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.258856 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.259027 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.259207 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nwfsm" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.353536 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384125 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data\") pod \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle\") pod \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384251 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom\") pod \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 
09:02:33.384317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs\") pod \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384340 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr9mx\" (UniqueName: \"kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx\") pod \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\" (UID: \"a322ec3d-e25c-4ccf-8a5d-af1b78914c8f\") " Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384822 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c7t\" (UniqueName: \"kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.384933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.386149 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs" (OuterVolumeSpecName: "logs") pod "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" (UID: "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.388327 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx" (OuterVolumeSpecName: "kube-api-access-wr9mx") pod "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" (UID: "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f"). InnerVolumeSpecName "kube-api-access-wr9mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.388997 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" (UID: "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.416106 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" (UID: "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.428456 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data" (OuterVolumeSpecName: "config-data") pod "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" (UID: "a322ec3d-e25c-4ccf-8a5d-af1b78914c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.486816 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.486861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8c7t\" (UniqueName: \"kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.486884 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.486919 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc 
kubenswrapper[4808]: I0311 09:02:33.487008 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.487019 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.487029 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.487037 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.487045 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr9mx\" (UniqueName: \"kubernetes.io/projected/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f-kube-api-access-wr9mx\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.488015 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.499040 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.499241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.504103 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8c7t\" (UniqueName: \"kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t\") pod \"openstackclient\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.572969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 11 09:02:33 crc kubenswrapper[4808]: I0311 09:02:33.800135 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb14cc88-783b-4bcb-bdbb-770da3ee3a4c" path="/var/lib/kubelet/pods/cb14cc88-783b-4bcb-bdbb-770da3ee3a4c/volumes" Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.039797 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 09:02:34 crc kubenswrapper[4808]: W0311 09:02:34.048451 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9deef18_212d_4f90_adbe_84f8bb0177e1.slice/crio-c56a17b436c1cbb93386b8afa966880c3fbb5a6b9d4558dc14343d3a9befc357 WatchSource:0}: Error finding container c56a17b436c1cbb93386b8afa966880c3fbb5a6b9d4558dc14343d3a9befc357: Status 404 returned error can't find the container with id c56a17b436c1cbb93386b8afa966880c3fbb5a6b9d4558dc14343d3a9befc357 Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.234992 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerStarted","Data":"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c"} Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.235038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerStarted","Data":"cd2b7d4db2246b62c0bfd63ef26fafb9db81109e82e362b45e8eac747588705e"} Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.241501 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86b7cb5956-tzj8f" Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.242481 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f9deef18-212d-4f90-adbe-84f8bb0177e1","Type":"ContainerStarted","Data":"c56a17b436c1cbb93386b8afa966880c3fbb5a6b9d4558dc14343d3a9befc357"} Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.272658 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:34 crc kubenswrapper[4808]: I0311 09:02:34.283741 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-86b7cb5956-tzj8f"] Mar 11 09:02:35 crc kubenswrapper[4808]: I0311 09:02:35.251028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerStarted","Data":"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14"} Mar 11 09:02:35 crc kubenswrapper[4808]: I0311 09:02:35.279483 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.279466489 podStartE2EDuration="3.279466489s" podCreationTimestamp="2026-03-11 09:02:32 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:35.272699712 +0000 UTC m=+1406.226023042" watchObservedRunningTime="2026-03-11 09:02:35.279466489 +0000 UTC m=+1406.232789809" Mar 11 09:02:35 crc kubenswrapper[4808]: I0311 09:02:35.651872 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 11 09:02:35 crc kubenswrapper[4808]: I0311 09:02:35.799397 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a322ec3d-e25c-4ccf-8a5d-af1b78914c8f" path="/var/lib/kubelet/pods/a322ec3d-e25c-4ccf-8a5d-af1b78914c8f/volumes" Mar 11 09:02:37 crc kubenswrapper[4808]: I0311 09:02:37.860957 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.212402 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"] Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.214180 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.216966 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.217074 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.217146 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.251229 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"] Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.398807 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.398874 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.398901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.398937 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fcw2\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.399018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.399086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.399150 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.399177 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc 
kubenswrapper[4808]: I0311 09:02:38.501129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501479 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501516 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501584 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6fcw2\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501631 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.501671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.502666 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.503163 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.508576 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.514775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.521340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.525537 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.525987 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fcw2\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2\") pod \"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.527656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle\") pod 
\"swift-proxy-8749b8c99-fl7cg\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:38 crc kubenswrapper[4808]: I0311 09:02:38.538787 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.938512 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.939248 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-central-agent" containerID="cri-o://83e4299f8db969047f115b2e6d865e8ed4751b1a09a6cc0470a630153af91366" gracePeriod=30 Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.939956 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="proxy-httpd" containerID="cri-o://52ab7660270800e9762a93dcee038a29b5580e99936735b6f8a6a592b04438f0" gracePeriod=30 Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.940005 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="sg-core" containerID="cri-o://d1944cb374befa9392ca208c08c1f194ef3875e319ef0b634adb20cebad58110" gracePeriod=30 Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.940036 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-notification-agent" containerID="cri-o://f21d8ff4624c0e299072dec0ed64600ad702be3d85c6613566143c1a8fa60529" gracePeriod=30 Mar 11 09:02:40 crc kubenswrapper[4808]: I0311 09:02:40.943018 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Mar 11 09:02:41 crc kubenswrapper[4808]: I0311 09:02:41.356475 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerID="52ab7660270800e9762a93dcee038a29b5580e99936735b6f8a6a592b04438f0" exitCode=0 Mar 11 09:02:41 crc kubenswrapper[4808]: I0311 09:02:41.356511 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerID="d1944cb374befa9392ca208c08c1f194ef3875e319ef0b634adb20cebad58110" exitCode=2 Mar 11 09:02:41 crc kubenswrapper[4808]: I0311 09:02:41.356531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerDied","Data":"52ab7660270800e9762a93dcee038a29b5580e99936735b6f8a6a592b04438f0"} Mar 11 09:02:41 crc kubenswrapper[4808]: I0311 09:02:41.356561 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerDied","Data":"d1944cb374befa9392ca208c08c1f194ef3875e319ef0b634adb20cebad58110"} Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.074223 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g579h"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.081312 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.085662 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g579h"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.171351 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrl2w\" (UniqueName: \"kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.171488 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.276689 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrl2w\" (UniqueName: \"kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.276757 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.277881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.278309 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8tb5k"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.279408 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.290926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c59a-account-create-update-xzlgx"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.292087 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.293894 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.301307 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8tb5k"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.318105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrl2w\" (UniqueName: \"kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w\") pod \"nova-api-db-create-g579h\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.327935 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c59a-account-create-update-xzlgx"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.380032 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krq89\" 
(UniqueName: \"kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.380141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.380208 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.380279 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdpt\" (UniqueName: \"kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.398007 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerID="f21d8ff4624c0e299072dec0ed64600ad702be3d85c6613566143c1a8fa60529" exitCode=0 Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.398049 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" 
containerID="83e4299f8db969047f115b2e6d865e8ed4751b1a09a6cc0470a630153af91366" exitCode=0 Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.398072 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerDied","Data":"f21d8ff4624c0e299072dec0ed64600ad702be3d85c6613566143c1a8fa60529"} Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.398116 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerDied","Data":"83e4299f8db969047f115b2e6d865e8ed4751b1a09a6cc0470a630153af91366"} Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.409754 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.417703 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vqtl9"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.419220 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.427826 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vqtl9"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.483811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krq89\" (UniqueName: \"kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.483888 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.483955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.485268 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.488377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdpt\" 
(UniqueName: \"kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.489113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.498433 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-q4864"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.499900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.501971 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.505467 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-q4864"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.507059 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krq89\" (UniqueName: \"kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89\") pod \"nova-cell0-db-create-8tb5k\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.510845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdpt\" (UniqueName: 
\"kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt\") pod \"nova-api-c59a-account-create-update-xzlgx\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") " pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.590440 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmt2\" (UniqueName: \"kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.590742 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.590788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.590837 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7n2\" (UniqueName: \"kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc 
kubenswrapper[4808]: I0311 09:02:42.597473 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.676521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.690532 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-prpnm"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.692058 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.692888 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7n2\" (UniqueName: \"kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.693018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmt2\" (UniqueName: \"kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.693084 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 
09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.693112 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.693909 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.694405 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.695800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.701554 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-prpnm"] Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.719499 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmt2\" (UniqueName: \"kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2\") pod \"nova-cell1-db-create-vqtl9\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") " pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.719866 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7n2\" (UniqueName: \"kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2\") pod \"nova-cell0-ac71-account-create-update-q4864\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") " pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.737725 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.794504 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.794559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wql7f\" (UniqueName: \"kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.891672 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-q4864" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.895824 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.895894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wql7f\" (UniqueName: \"kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.897121 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:42 crc kubenswrapper[4808]: I0311 09:02:42.914257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wql7f\" (UniqueName: \"kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f\") pod \"nova-cell1-3a44-account-create-update-prpnm\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") " pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:43 crc kubenswrapper[4808]: I0311 09:02:43.013914 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:43 crc kubenswrapper[4808]: I0311 09:02:43.043113 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.161:3000/\": dial tcp 10.217.0.161:3000: connect: connection refused" Mar 11 09:02:43 crc kubenswrapper[4808]: I0311 09:02:43.095413 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.424886 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f9deef18-212d-4f90-adbe-84f8bb0177e1","Type":"ContainerStarted","Data":"daa9d5be0dd494f080e478e2334f54c0d62a05a1d3e5c21ee9c65ba6d4767c26"} Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.448177 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.500160098 podStartE2EDuration="11.448155703s" podCreationTimestamp="2026-03-11 09:02:33 +0000 UTC" firstStartedPulling="2026-03-11 09:02:34.054184905 +0000 UTC m=+1405.007508225" lastFinishedPulling="2026-03-11 09:02:44.00218051 +0000 UTC m=+1414.955503830" observedRunningTime="2026-03-11 09:02:44.442011044 +0000 UTC m=+1415.395334364" watchObservedRunningTime="2026-03-11 09:02:44.448155703 +0000 UTC m=+1415.401479023" Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.451969 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a","Type":"ContainerDied","Data":"b4bb3bff315f5341075247c519d8f306799a8259fdd75e883cc1e10b032d7ea0"} Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.452011 4808 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b4bb3bff315f5341075247c519d8f306799a8259fdd75e883cc1e10b032d7ea0" Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.475261 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.555445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vqtl9"] Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649019 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsss5\" (UniqueName: \"kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649215 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649268 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649327 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649423 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649481 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649546 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd\") pod \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\" (UID: \"ff4d7462-f6c6-4e65-b6ed-8ba0a281520a\") " Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.649988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.650264 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.650794 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.650818 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.656551 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5" (OuterVolumeSpecName: "kube-api-access-fsss5") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "kube-api-access-fsss5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.662149 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts" (OuterVolumeSpecName: "scripts") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.684627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.746233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.752384 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.752413 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.752425 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsss5\" (UniqueName: \"kubernetes.io/projected/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-kube-api-access-fsss5\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.752436 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.844995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data" (OuterVolumeSpecName: "config-data") pod "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" (UID: "ff4d7462-f6c6-4e65-b6ed-8ba0a281520a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.858917 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.883034 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-q4864"]
Mar 11 09:02:44 crc kubenswrapper[4808]: W0311 09:02:44.888593 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb889123_a36e_4211_af3c_a0febc942f46.slice/crio-102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6 WatchSource:0}: Error finding container 102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6: Status 404 returned error can't find the container with id 102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.902773 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-prpnm"]
Mar 11 09:02:44 crc kubenswrapper[4808]: I0311 09:02:44.937324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c59a-account-create-update-xzlgx"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.062017 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g579h"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.127384 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8tb5k"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.210815 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"]
Mar 11 09:02:45 crc kubenswrapper[4808]: W0311 09:02:45.221988 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8cf2302_c420_4e0f_a292_a601a5f66bfa.slice/crio-53a5aa11a7efd6aa2edf975053013012ba3c78d137ef5363d2a76cff4d87301c WatchSource:0}: Error finding container 53a5aa11a7efd6aa2edf975053013012ba3c78d137ef5363d2a76cff4d87301c: Status 404 returned error can't find the container with id 53a5aa11a7efd6aa2edf975053013012ba3c78d137ef5363d2a76cff4d87301c
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.467076 4808 generic.go:334] "Generic (PLEG): container finished" podID="eb889123-a36e-4211-af3c-a0febc942f46" containerID="5bbe12a55844c71bd4cfc6765ae8a6f51e3c137772a16659625da2613a4bab60" exitCode=0
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.467328 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ac71-account-create-update-q4864" event={"ID":"eb889123-a36e-4211-af3c-a0febc942f46","Type":"ContainerDied","Data":"5bbe12a55844c71bd4cfc6765ae8a6f51e3c137772a16659625da2613a4bab60"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.467431 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ac71-account-create-update-q4864" event={"ID":"eb889123-a36e-4211-af3c-a0febc942f46","Type":"ContainerStarted","Data":"102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.469572 4808 generic.go:334] "Generic (PLEG): container finished" podID="730a184e-642a-4df2-a747-c04625a046b8" containerID="7698d68a645522d06d9e9b82db59dae460efc7f1486f465cafc9671f6152e29c" exitCode=0
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.469631 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqtl9" event={"ID":"730a184e-642a-4df2-a747-c04625a046b8","Type":"ContainerDied","Data":"7698d68a645522d06d9e9b82db59dae460efc7f1486f465cafc9671f6152e29c"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.469661 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqtl9" event={"ID":"730a184e-642a-4df2-a747-c04625a046b8","Type":"ContainerStarted","Data":"44e4db4f9103c3889617fd4ef863c57331972f667967ea0057b6f140c81a60ae"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.474669 4808 generic.go:334] "Generic (PLEG): container finished" podID="44caf86d-00aa-48a8-b56d-d4395487da92" containerID="c57aa8a8af604e7085005636bc6d04d8b2f1f852212d6609092d95311d5a8ea9" exitCode=0
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.474739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c59a-account-create-update-xzlgx" event={"ID":"44caf86d-00aa-48a8-b56d-d4395487da92","Type":"ContainerDied","Data":"c57aa8a8af604e7085005636bc6d04d8b2f1f852212d6609092d95311d5a8ea9"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.474760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c59a-account-create-update-xzlgx" event={"ID":"44caf86d-00aa-48a8-b56d-d4395487da92","Type":"ContainerStarted","Data":"7e03ba45948dc8b53568b79c34208c372f1813647f99d92310a420eef1cdbb93"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.477071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g579h" event={"ID":"e7af695b-1871-4cff-91ad-0bf62afc9ef6","Type":"ContainerStarted","Data":"649e62172af41ee32fa8ad64ffdf69d7b43c61361a5f0fec3f85bbd20e28c3fa"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.477109 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g579h" event={"ID":"e7af695b-1871-4cff-91ad-0bf62afc9ef6","Type":"ContainerStarted","Data":"dd7259951c89ad7f639f7187469625ac47c5693464f8e280bb97b5acd9b2db9f"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.479398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8tb5k" event={"ID":"75d7eafe-ecf9-4784-853c-7538a5bb00ca","Type":"ContainerStarted","Data":"bdae89947aee5d01574bc898d1bb0cd1ef75b87aa4aea1f2faa3e6aefa15e2fa"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.479425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8tb5k" event={"ID":"75d7eafe-ecf9-4784-853c-7538a5bb00ca","Type":"ContainerStarted","Data":"a34ac1543d894d02e1ad9cb2eae0ade7caeb7be288a14842df2f6fa865f723a1"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.481088 4808 generic.go:334] "Generic (PLEG): container finished" podID="42045259-953d-4ea6-bda4-d24008a021b6" containerID="ed78a0f60c4b6e77480698d703044dd5de4e4c818ad638d0fa966fddfb16ff16" exitCode=0
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.481130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" event={"ID":"42045259-953d-4ea6-bda4-d24008a021b6","Type":"ContainerDied","Data":"ed78a0f60c4b6e77480698d703044dd5de4e4c818ad638d0fa966fddfb16ff16"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.481147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" event={"ID":"42045259-953d-4ea6-bda4-d24008a021b6","Type":"ContainerStarted","Data":"8cc683b7122380e337686240323c4fe27aad281ef169d431d817430b1d33ed0b"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.482162 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.482498 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerStarted","Data":"53a5aa11a7efd6aa2edf975053013012ba3c78d137ef5363d2a76cff4d87301c"}
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.535830 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-g579h" podStartSLOduration=3.5358080960000002 podStartE2EDuration="3.535808096s" podCreationTimestamp="2026-03-11 09:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:45.52078731 +0000 UTC m=+1416.474110630" watchObservedRunningTime="2026-03-11 09:02:45.535808096 +0000 UTC m=+1416.489131426"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.559397 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8tb5k" podStartSLOduration=3.559381052 podStartE2EDuration="3.559381052s" podCreationTimestamp="2026-03-11 09:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:45.553699987 +0000 UTC m=+1416.507023307" watchObservedRunningTime="2026-03-11 09:02:45.559381052 +0000 UTC m=+1416.512704372"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.698653 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.707909 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.720750 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:45 crc kubenswrapper[4808]: E0311 09:02:45.721096 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-notification-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721113 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-notification-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: E0311 09:02:45.721140 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="proxy-httpd"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721147 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="proxy-httpd"
Mar 11 09:02:45 crc kubenswrapper[4808]: E0311 09:02:45.721154 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="sg-core"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721160 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="sg-core"
Mar 11 09:02:45 crc kubenswrapper[4808]: E0311 09:02:45.721170 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-central-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721176 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-central-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721345 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="proxy-httpd"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721375 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="sg-core"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721392 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-central-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.721403 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" containerName="ceilometer-notification-agent"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.722912 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.724980 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.727439 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.735969 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.805954 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4d7462-f6c6-4e65-b6ed-8ba0a281520a" path="/var/lib/kubelet/pods/ff4d7462-f6c6-4e65-b6ed-8ba0a281520a/volumes"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.889779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890151 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890211 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdjb\" (UniqueName: \"kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890274 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.890325 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992219 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992302 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992417 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmdjb\" (UniqueName: \"kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.992506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.996786 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.996828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.998422 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.998713 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:45 crc kubenswrapper[4808]: I0311 09:02:45.998919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.003669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.018646 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmdjb\" (UniqueName: \"kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb\") pod \"ceilometer-0\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " pod="openstack/ceilometer-0"
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.043174 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.493661 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7af695b-1871-4cff-91ad-0bf62afc9ef6" containerID="649e62172af41ee32fa8ad64ffdf69d7b43c61361a5f0fec3f85bbd20e28c3fa" exitCode=0
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.493705 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g579h" event={"ID":"e7af695b-1871-4cff-91ad-0bf62afc9ef6","Type":"ContainerDied","Data":"649e62172af41ee32fa8ad64ffdf69d7b43c61361a5f0fec3f85bbd20e28c3fa"}
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.499383 4808 generic.go:334] "Generic (PLEG): container finished" podID="75d7eafe-ecf9-4784-853c-7538a5bb00ca" containerID="bdae89947aee5d01574bc898d1bb0cd1ef75b87aa4aea1f2faa3e6aefa15e2fa" exitCode=0
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.499456 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8tb5k" event={"ID":"75d7eafe-ecf9-4784-853c-7538a5bb00ca","Type":"ContainerDied","Data":"bdae89947aee5d01574bc898d1bb0cd1ef75b87aa4aea1f2faa3e6aefa15e2fa"}
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.501669 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerStarted","Data":"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32"}
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.501722 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerStarted","Data":"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311"}
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.525398 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.549116 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8749b8c99-fl7cg" podStartSLOduration=8.549098419 podStartE2EDuration="8.549098419s" podCreationTimestamp="2026-03-11 09:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:46.536196584 +0000 UTC m=+1417.489519904" watchObservedRunningTime="2026-03-11 09:02:46.549098419 +0000 UTC m=+1417.502421749"
Mar 11 09:02:46 crc kubenswrapper[4808]: W0311 09:02:46.662595 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739ef767_9cc0_4f25_82d6_4f17ee457f61.slice/crio-3b4e8ac696b140ed6583f58f7391f9d008ec8082bbf95d7ddb74490abcade0f1 WatchSource:0}: Error finding container 3b4e8ac696b140ed6583f58f7391f9d008ec8082bbf95d7ddb74490abcade0f1: Status 404 returned error can't find the container with id 3b4e8ac696b140ed6583f58f7391f9d008ec8082bbf95d7ddb74490abcade0f1
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.663490 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.847594 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.848208 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-log" containerID="cri-o://4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8" gracePeriod=30
Mar 11 09:02:46 crc kubenswrapper[4808]: I0311 09:02:46.848810 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-httpd" containerID="cri-o://b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c" gracePeriod=30
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.040171 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-q4864"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.070561 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-prpnm"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.107105 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-xzlgx"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.147993 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb7n2\" (UniqueName: \"kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2\") pod \"eb889123-a36e-4211-af3c-a0febc942f46\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.148071 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts\") pod \"eb889123-a36e-4211-af3c-a0febc942f46\" (UID: \"eb889123-a36e-4211-af3c-a0febc942f46\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.148812 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb889123-a36e-4211-af3c-a0febc942f46" (UID: "eb889123-a36e-4211-af3c-a0febc942f46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.149798 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vqtl9"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.157748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2" (OuterVolumeSpecName: "kube-api-access-mb7n2") pod "eb889123-a36e-4211-af3c-a0febc942f46" (UID: "eb889123-a36e-4211-af3c-a0febc942f46"). InnerVolumeSpecName "kube-api-access-mb7n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.249598 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmt2\" (UniqueName: \"kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2\") pod \"730a184e-642a-4df2-a747-c04625a046b8\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250034 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts\") pod \"730a184e-642a-4df2-a747-c04625a046b8\" (UID: \"730a184e-642a-4df2-a747-c04625a046b8\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts\") pod \"42045259-953d-4ea6-bda4-d24008a021b6\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250170 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccdpt\" (UniqueName: \"kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt\") pod \"44caf86d-00aa-48a8-b56d-d4395487da92\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250207 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts\") pod \"44caf86d-00aa-48a8-b56d-d4395487da92\" (UID: \"44caf86d-00aa-48a8-b56d-d4395487da92\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250236 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wql7f\" (UniqueName: \"kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f\") pod \"42045259-953d-4ea6-bda4-d24008a021b6\" (UID: \"42045259-953d-4ea6-bda4-d24008a021b6\") "
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250642 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb7n2\" (UniqueName: \"kubernetes.io/projected/eb889123-a36e-4211-af3c-a0febc942f46-kube-api-access-mb7n2\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.250656 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb889123-a36e-4211-af3c-a0febc942f46-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.251696 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "730a184e-642a-4df2-a747-c04625a046b8" (UID: "730a184e-642a-4df2-a747-c04625a046b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.251779 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44caf86d-00aa-48a8-b56d-d4395487da92" (UID: "44caf86d-00aa-48a8-b56d-d4395487da92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.253244 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42045259-953d-4ea6-bda4-d24008a021b6" (UID: "42045259-953d-4ea6-bda4-d24008a021b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.253622 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2" (OuterVolumeSpecName: "kube-api-access-kqmt2") pod "730a184e-642a-4df2-a747-c04625a046b8" (UID: "730a184e-642a-4df2-a747-c04625a046b8"). InnerVolumeSpecName "kube-api-access-kqmt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.254388 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt" (OuterVolumeSpecName: "kube-api-access-ccdpt") pod "44caf86d-00aa-48a8-b56d-d4395487da92" (UID: "44caf86d-00aa-48a8-b56d-d4395487da92"). InnerVolumeSpecName "kube-api-access-ccdpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.254968 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f" (OuterVolumeSpecName: "kube-api-access-wql7f") pod "42045259-953d-4ea6-bda4-d24008a021b6" (UID: "42045259-953d-4ea6-bda4-d24008a021b6"). InnerVolumeSpecName "kube-api-access-wql7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352022 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42045259-953d-4ea6-bda4-d24008a021b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352058 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccdpt\" (UniqueName: \"kubernetes.io/projected/44caf86d-00aa-48a8-b56d-d4395487da92-kube-api-access-ccdpt\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352070 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44caf86d-00aa-48a8-b56d-d4395487da92-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352090 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wql7f\" (UniqueName: \"kubernetes.io/projected/42045259-953d-4ea6-bda4-d24008a021b6-kube-api-access-wql7f\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352101 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqmt2\" (UniqueName: \"kubernetes.io/projected/730a184e-642a-4df2-a747-c04625a046b8-kube-api-access-kqmt2\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.352109 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730a184e-642a-4df2-a747-c04625a046b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.509818 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerStarted","Data":"3b4e8ac696b140ed6583f58f7391f9d008ec8082bbf95d7ddb74490abcade0f1"}
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.511180 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ac71-account-create-update-q4864" event={"ID":"eb889123-a36e-4211-af3c-a0febc942f46","Type":"ContainerDied","Data":"102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6"}
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.511208 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c9207eef200504cb42c060ab9cc38a7e1afce60d3876b191af29a36da5de6"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.511259 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-q4864"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.519100 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vqtl9" event={"ID":"730a184e-642a-4df2-a747-c04625a046b8","Type":"ContainerDied","Data":"44e4db4f9103c3889617fd4ef863c57331972f667967ea0057b6f140c81a60ae"}
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.519148 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e4db4f9103c3889617fd4ef863c57331972f667967ea0057b6f140c81a60ae"
Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.519205 4808 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-db-create-vqtl9" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.523947 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c59a-account-create-update-xzlgx" event={"ID":"44caf86d-00aa-48a8-b56d-d4395487da92","Type":"ContainerDied","Data":"7e03ba45948dc8b53568b79c34208c372f1813647f99d92310a420eef1cdbb93"} Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.523978 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-xzlgx" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.523983 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e03ba45948dc8b53568b79c34208c372f1813647f99d92310a420eef1cdbb93" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.529122 4808 generic.go:334] "Generic (PLEG): container finished" podID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerID="4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8" exitCode=143 Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.529270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerDied","Data":"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8"} Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.532223 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.532891 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3a44-account-create-update-prpnm" event={"ID":"42045259-953d-4ea6-bda4-d24008a021b6","Type":"ContainerDied","Data":"8cc683b7122380e337686240323c4fe27aad281ef169d431d817430b1d33ed0b"} Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.532944 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cc683b7122380e337686240323c4fe27aad281ef169d431d817430b1d33ed0b" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.533040 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:47 crc kubenswrapper[4808]: I0311 09:02:47.533303 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.059385 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.064576 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.089935 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krq89\" (UniqueName: \"kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89\") pod \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts\") pod \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\" (UID: \"75d7eafe-ecf9-4784-853c-7538a5bb00ca\") " Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090170 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts\") pod \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090325 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrl2w\" (UniqueName: \"kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w\") pod \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\" (UID: \"e7af695b-1871-4cff-91ad-0bf62afc9ef6\") " Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090492 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75d7eafe-ecf9-4784-853c-7538a5bb00ca" (UID: "75d7eafe-ecf9-4784-853c-7538a5bb00ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090536 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7af695b-1871-4cff-91ad-0bf62afc9ef6" (UID: "e7af695b-1871-4cff-91ad-0bf62afc9ef6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090858 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7af695b-1871-4cff-91ad-0bf62afc9ef6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.090885 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75d7eafe-ecf9-4784-853c-7538a5bb00ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.094555 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89" (OuterVolumeSpecName: "kube-api-access-krq89") pod "75d7eafe-ecf9-4784-853c-7538a5bb00ca" (UID: "75d7eafe-ecf9-4784-853c-7538a5bb00ca"). InnerVolumeSpecName "kube-api-access-krq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.094717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w" (OuterVolumeSpecName: "kube-api-access-zrl2w") pod "e7af695b-1871-4cff-91ad-0bf62afc9ef6" (UID: "e7af695b-1871-4cff-91ad-0bf62afc9ef6"). InnerVolumeSpecName "kube-api-access-zrl2w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.192927 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrl2w\" (UniqueName: \"kubernetes.io/projected/e7af695b-1871-4cff-91ad-0bf62afc9ef6-kube-api-access-zrl2w\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.192962 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krq89\" (UniqueName: \"kubernetes.io/projected/75d7eafe-ecf9-4784-853c-7538a5bb00ca-kube-api-access-krq89\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.543098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g579h" event={"ID":"e7af695b-1871-4cff-91ad-0bf62afc9ef6","Type":"ContainerDied","Data":"dd7259951c89ad7f639f7187469625ac47c5693464f8e280bb97b5acd9b2db9f"} Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.543327 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7259951c89ad7f639f7187469625ac47c5693464f8e280bb97b5acd9b2db9f" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.543129 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g579h" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.545098 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8tb5k" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.545094 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8tb5k" event={"ID":"75d7eafe-ecf9-4784-853c-7538a5bb00ca","Type":"ContainerDied","Data":"a34ac1543d894d02e1ad9cb2eae0ade7caeb7be288a14842df2f6fa865f723a1"} Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.545228 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a34ac1543d894d02e1ad9cb2eae0ade7caeb7be288a14842df2f6fa865f723a1" Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.546600 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerStarted","Data":"ed0a461d69f053368430d74e235ea6c53d79ec014f123770df58bd842adc0a7b"} Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.931454 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.931748 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-log" containerID="cri-o://a9eb276a3708ab32e878bcdcf4d3597cb44173cc940b8b74f6a1dee37bbdfdf6" gracePeriod=30 Mar 11 09:02:48 crc kubenswrapper[4808]: I0311 09:02:48.931818 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-httpd" containerID="cri-o://8c9f08a6349aaa6356f7534edf61cd2a47293bb1bb86157e93a29265615a1e5b" gracePeriod=30 Mar 11 09:02:49 crc kubenswrapper[4808]: I0311 09:02:49.554695 4808 scope.go:117] "RemoveContainer" containerID="0cf42abbdc063b77a117eb7b7888a5cd44a70d34d36139998e533c45e5c2b395" Mar 11 09:02:49 crc kubenswrapper[4808]: 
I0311 09:02:49.560604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerStarted","Data":"2bec327a0d71ff728f791812031118beaf98ec1c3699bce351678b0eb18a3eb3"} Mar 11 09:02:49 crc kubenswrapper[4808]: I0311 09:02:49.562785 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerID="a9eb276a3708ab32e878bcdcf4d3597cb44173cc940b8b74f6a1dee37bbdfdf6" exitCode=143 Mar 11 09:02:49 crc kubenswrapper[4808]: I0311 09:02:49.562820 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerDied","Data":"a9eb276a3708ab32e878bcdcf4d3597cb44173cc940b8b74f6a1dee37bbdfdf6"} Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.014671 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:49066->10.217.0.151:9292: read: connection reset by peer" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.014967 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:49064->10.217.0.151:9292: read: connection reset by peer" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.446945 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535620 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535655 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535701 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535760 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzxhn\" (UniqueName: \"kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535803 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535825 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535844 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.535879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts\") pod \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\" (UID: \"fd05834b-3326-4f0c-a5b8-2a0e28e5782e\") " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.537180 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.538209 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs" (OuterVolumeSpecName: "logs") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.547441 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn" (OuterVolumeSpecName: "kube-api-access-hzxhn") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "kube-api-access-hzxhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.551467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts" (OuterVolumeSpecName: "scripts") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.552485 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.579735 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerStarted","Data":"ca53a94271c4cc2abe22fff358b29cea2d31a56a715fbe439d44661a8ab9cf8a"} Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.588328 4808 generic.go:334] "Generic (PLEG): container finished" podID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerID="b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c" exitCode=0 Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.588460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerDied","Data":"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c"} Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.588496 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd05834b-3326-4f0c-a5b8-2a0e28e5782e","Type":"ContainerDied","Data":"b37f8681748e68387428da1d9fc7e8d6acf041470008e3ea53e89725348b9548"} Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.588540 4808 scope.go:117] "RemoveContainer" containerID="b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.588539 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.603488 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.624449 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data" (OuterVolumeSpecName: "config-data") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.633168 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd05834b-3326-4f0c-a5b8-2a0e28e5782e" (UID: "fd05834b-3326-4f0c-a5b8-2a0e28e5782e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637296 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637333 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637349 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637377 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-httpd-run\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637391 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzxhn\" (UniqueName: \"kubernetes.io/projected/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-kube-api-access-hzxhn\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637424 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637438 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.637449 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd05834b-3326-4f0c-a5b8-2a0e28e5782e-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.661115 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.717293 4808 scope.go:117] "RemoveContainer" containerID="4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.738698 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.738833 4808 scope.go:117] "RemoveContainer" containerID="b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c" Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.739189 4808 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c\": container with ID starting with b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c not found: ID does not exist" containerID="b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.739219 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c"} err="failed to get container status \"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c\": rpc error: code = NotFound desc = could not find container \"b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c\": container with ID starting with b41ea5d8370aa5fc880ec07310ffc8a8ad1a2432531ddb92270dee58ab23797c not found: ID does not exist" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.739238 4808 scope.go:117] "RemoveContainer" containerID="4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8" Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.739458 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8\": container with ID starting with 4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8 not found: ID does not exist" containerID="4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.739483 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8"} err="failed to get container status \"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8\": rpc error: code = NotFound desc = could 
not find container \"4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8\": container with ID starting with 4a2f973d3d8aa7749026d2a72e93b744c4d626ad6b2c8dfbd20c14b2858430d8 not found: ID does not exist" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.926892 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.942116 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.961640 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.961972 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730a184e-642a-4df2-a747-c04625a046b8" containerName="mariadb-database-create" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.961992 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="730a184e-642a-4df2-a747-c04625a046b8" containerName="mariadb-database-create" Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962009 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb889123-a36e-4211-af3c-a0febc942f46" containerName="mariadb-account-create-update" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962017 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb889123-a36e-4211-af3c-a0febc942f46" containerName="mariadb-account-create-update" Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962030 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7af695b-1871-4cff-91ad-0bf62afc9ef6" containerName="mariadb-database-create" Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962036 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7af695b-1871-4cff-91ad-0bf62afc9ef6" containerName="mariadb-database-create" Mar 11 09:02:50 crc kubenswrapper[4808]: 
E0311 09:02:50.962043 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d7eafe-ecf9-4784-853c-7538a5bb00ca" containerName="mariadb-database-create"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962049 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d7eafe-ecf9-4784-853c-7538a5bb00ca" containerName="mariadb-database-create"
Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962057 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-httpd"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962063 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-httpd"
Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962080 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-log"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962085 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-log"
Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962095 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44caf86d-00aa-48a8-b56d-d4395487da92" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962102 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="44caf86d-00aa-48a8-b56d-d4395487da92" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: E0311 09:02:50.962123 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42045259-953d-4ea6-bda4-d24008a021b6" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962129 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="42045259-953d-4ea6-bda4-d24008a021b6" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962303 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7af695b-1871-4cff-91ad-0bf62afc9ef6" containerName="mariadb-database-create"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962317 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-httpd"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962332 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="44caf86d-00aa-48a8-b56d-d4395487da92" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962341 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb889123-a36e-4211-af3c-a0febc942f46" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962354 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" containerName="glance-log"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962381 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d7eafe-ecf9-4784-853c-7538a5bb00ca" containerName="mariadb-database-create"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962390 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="42045259-953d-4ea6-bda4-d24008a021b6" containerName="mariadb-account-create-update"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.962400 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="730a184e-642a-4df2-a747-c04625a046b8" containerName="mariadb-database-create"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.963278 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.966892 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.967074 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 11 09:02:50 crc kubenswrapper[4808]: I0311 09:02:50.988283 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.158270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.158789 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbxq\" (UniqueName: \"kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.158902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.158932 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.159024 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.159072 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.159167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.159206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261433 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261474 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbxq\" (UniqueName: \"kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261610 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.261694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.262219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.262380 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.262420 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.266431 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.267019 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.267845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.269045 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.285016 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbxq\" (UniqueName: \"kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.294110 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.584856 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 11 09:02:51 crc kubenswrapper[4808]: I0311 09:02:51.821611 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd05834b-3326-4f0c-a5b8-2a0e28e5782e" path="/var/lib/kubelet/pods/fd05834b-3326-4f0c-a5b8-2a0e28e5782e/volumes"
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.293745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:02:52 crc kubenswrapper[4808]: W0311 09:02:52.297714 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3cf17f3_18e6_43f9_ab09_5882a99ffa51.slice/crio-635a62079ecd8d7a8be3b970aa7555337e5bc87059f40e3f248650e4fcbdcc13 WatchSource:0}: Error finding container 635a62079ecd8d7a8be3b970aa7555337e5bc87059f40e3f248650e4fcbdcc13: Status 404 returned error can't find the container with id 635a62079ecd8d7a8be3b970aa7555337e5bc87059f40e3f248650e4fcbdcc13
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.624032 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerID="8c9f08a6349aaa6356f7534edf61cd2a47293bb1bb86157e93a29265615a1e5b" exitCode=0
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.624100 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerDied","Data":"8c9f08a6349aaa6356f7534edf61cd2a47293bb1bb86157e93a29265615a1e5b"}
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.637956 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerStarted","Data":"bac8715c274189d3e68892b1485f0b76eead2cb66eb41971200f40fbddb0ae58"}
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.639373 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-central-agent" containerID="cri-o://ed0a461d69f053368430d74e235ea6c53d79ec014f123770df58bd842adc0a7b" gracePeriod=30
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.639523 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="proxy-httpd" containerID="cri-o://bac8715c274189d3e68892b1485f0b76eead2cb66eb41971200f40fbddb0ae58" gracePeriod=30
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.639569 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="sg-core" containerID="cri-o://ca53a94271c4cc2abe22fff358b29cea2d31a56a715fbe439d44661a8ab9cf8a" gracePeriod=30
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.639612 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-notification-agent" containerID="cri-o://2bec327a0d71ff728f791812031118beaf98ec1c3699bce351678b0eb18a3eb3" gracePeriod=30
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.639734 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.642057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerStarted","Data":"635a62079ecd8d7a8be3b970aa7555337e5bc87059f40e3f248650e4fcbdcc13"}
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.670222 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.651946244 podStartE2EDuration="7.670205672s" podCreationTimestamp="2026-03-11 09:02:45 +0000 UTC" firstStartedPulling="2026-03-11 09:02:46.668903461 +0000 UTC m=+1417.622226781" lastFinishedPulling="2026-03-11 09:02:51.687162889 +0000 UTC m=+1422.640486209" observedRunningTime="2026-03-11 09:02:52.668957626 +0000 UTC m=+1423.622280946" watchObservedRunningTime="2026-03-11 09:02:52.670205672 +0000 UTC m=+1423.623528992"
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.717620 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902013 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff9b5\" (UniqueName: \"kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902679 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902796 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902821 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902871 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.902899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1a9b795b-5646-4126-b1fb-609c53efdf13\" (UID: \"1a9b795b-5646-4126-b1fb-609c53efdf13\") "
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.910082 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5" (OuterVolumeSpecName: "kube-api-access-ff9b5") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "kube-api-access-ff9b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.914715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs" (OuterVolumeSpecName: "logs") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.914756 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.914804 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts" (OuterVolumeSpecName: "scripts") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.917497 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.970769 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7676f56769-zslbs"
Mar 11 09:02:52 crc kubenswrapper[4808]: I0311 09:02:52.973971 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005401 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005435 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005446 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a9b795b-5646-4126-b1fb-609c53efdf13-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005454 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005484 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.005493 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff9b5\" (UniqueName: \"kubernetes.io/projected/1a9b795b-5646-4126-b1fb-609c53efdf13-kube-api-access-ff9b5\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.071414 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data" (OuterVolumeSpecName: "config-data") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.082463 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"]
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.082755 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-764ddbc49b-rd7qj" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-api" containerID="cri-o://8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644" gracePeriod=30
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.083160 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-764ddbc49b-rd7qj" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-httpd" containerID="cri-o://8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405" gracePeriod=30
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.083528 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a9b795b-5646-4126-b1fb-609c53efdf13" (UID: "1a9b795b-5646-4126-b1fb-609c53efdf13"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.107704 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.107730 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9b795b-5646-4126-b1fb-609c53efdf13-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.109827 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.126025 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dmgm7"]
Mar 11 09:02:53 crc kubenswrapper[4808]: E0311 09:02:53.126446 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-httpd"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.126457 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-httpd"
Mar 11 09:02:53 crc kubenswrapper[4808]: E0311 09:02:53.126473 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-log"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.126479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-log"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.126658 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-httpd"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.126678 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" containerName="glance-log"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.127312 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.133838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h5bjm"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.133996 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.134097 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.158220 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dmgm7"]
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.209704 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.311934 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.311990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.312162 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.312423 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds98q\" (UniqueName: \"kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.414138 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.414229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds98q\" (UniqueName: \"kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.414281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.414313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.418126 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.418447 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.418799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.429709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds98q\" (UniqueName: \"kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q\") pod \"nova-cell0-conductor-db-sync-dmgm7\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.459020 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dmgm7"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.546953 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8749b8c99-fl7cg"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.558000 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8749b8c99-fl7cg"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.710754 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerStarted","Data":"d3ed6d04157e55ecb439f27f04de8f8ce7e201a4c77868d475de1fccec4b926a"}
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.723465 4808 generic.go:334] "Generic (PLEG): container finished" podID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerID="8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405" exitCode=0
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.723530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerDied","Data":"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405"}
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.746334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a9b795b-5646-4126-b1fb-609c53efdf13","Type":"ContainerDied","Data":"1095bead7237ff4804966e486e7e792116f58e022df7a3fb9f0c1ba328442629"}
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.746409 4808 scope.go:117] "RemoveContainer" containerID="8c9f08a6349aaa6356f7534edf61cd2a47293bb1bb86157e93a29265615a1e5b"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.746562 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.787823 4808 generic.go:334] "Generic (PLEG): container finished" podID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerID="bac8715c274189d3e68892b1485f0b76eead2cb66eb41971200f40fbddb0ae58" exitCode=0
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.787857 4808 generic.go:334] "Generic (PLEG): container finished" podID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerID="ca53a94271c4cc2abe22fff358b29cea2d31a56a715fbe439d44661a8ab9cf8a" exitCode=2
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.787865 4808 generic.go:334] "Generic (PLEG): container finished" podID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerID="2bec327a0d71ff728f791812031118beaf98ec1c3699bce351678b0eb18a3eb3" exitCode=0
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.788852 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerDied","Data":"bac8715c274189d3e68892b1485f0b76eead2cb66eb41971200f40fbddb0ae58"}
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.788890 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerDied","Data":"ca53a94271c4cc2abe22fff358b29cea2d31a56a715fbe439d44661a8ab9cf8a"}
Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.807117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerDied","Data":"2bec327a0d71ff728f791812031118beaf98ec1c3699bce351678b0eb18a3eb3"} Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.818582 4808 scope.go:117] "RemoveContainer" containerID="a9eb276a3708ab32e878bcdcf4d3597cb44173cc940b8b74f6a1dee37bbdfdf6" Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.842460 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.888632 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.904417 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.905772 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.905853 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.909756 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 11 09:02:53 crc kubenswrapper[4808]: I0311 09:02:53.913662 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.044797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045169 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " 
pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045214 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcsns\" (UniqueName: \"kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.045326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.092272 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dmgm7"] Mar 11 09:02:54 crc kubenswrapper[4808]: W0311 09:02:54.101579 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16e5dd70_3baf_4a95_be4e_d3f27d963aa6.slice/crio-7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29 WatchSource:0}: Error finding container 7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29: Status 404 returned error can't find the container with id 7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29 Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151350 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151407 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151499 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151526 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151554 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcsns\" (UniqueName: \"kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.151760 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.154231 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.157081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.162622 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.166129 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.181206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.196495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcsns\" (UniqueName: 
\"kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.196575 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.222957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.261902 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.799414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerStarted","Data":"69ba1fe4eed62164ec20e29e3973c8d7238e07cc99a3de39bc2e21c2312afc26"} Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.800473 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" event={"ID":"16e5dd70-3baf-4a95-be4e-d3f27d963aa6","Type":"ContainerStarted","Data":"7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29"} Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.807680 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:02:54 crc kubenswrapper[4808]: I0311 09:02:54.824845 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.824820097 podStartE2EDuration="4.824820097s" podCreationTimestamp="2026-03-11 09:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:54.82181686 +0000 UTC m=+1425.775140180" watchObservedRunningTime="2026-03-11 09:02:54.824820097 +0000 UTC m=+1425.778143417" Mar 11 09:02:55 crc kubenswrapper[4808]: I0311 09:02:55.806932 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9b795b-5646-4126-b1fb-609c53efdf13" path="/var/lib/kubelet/pods/1a9b795b-5646-4126-b1fb-609c53efdf13/volumes" Mar 11 09:02:55 crc kubenswrapper[4808]: I0311 09:02:55.829328 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerStarted","Data":"b19b84b99164bc73197e675a39d0a76695e78a944f51b46351fa56c764200830"} Mar 11 09:02:55 crc kubenswrapper[4808]: I0311 09:02:55.829380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerStarted","Data":"ed23abd17b0f6436dcbda8e69d671ae6eccdff2892f515ea882b5ba02bae2340"} Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.335234 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.407720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config\") pod \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.407765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config\") pod \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.407811 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle\") pod \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.407908 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs\") pod \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\" 
(UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.407943 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv42w\" (UniqueName: \"kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w\") pod \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\" (UID: \"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907\") " Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.417030 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" (UID: "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.420589 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w" (OuterVolumeSpecName: "kube-api-access-jv42w") pod "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" (UID: "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907"). InnerVolumeSpecName "kube-api-access-jv42w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.493038 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" (UID: "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.508529 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" (UID: "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.509844 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.509876 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv42w\" (UniqueName: \"kubernetes.io/projected/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-kube-api-access-jv42w\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.509892 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.509905 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.521264 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config" (OuterVolumeSpecName: "config") pod "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" (UID: "1d8e77d9-ecc3-4798-9a33-aeaa7e24a907"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.611823 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.878652 4808 generic.go:334] "Generic (PLEG): container finished" podID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerID="8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644" exitCode=0 Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.878714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerDied","Data":"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644"} Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.878741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-764ddbc49b-rd7qj" event={"ID":"1d8e77d9-ecc3-4798-9a33-aeaa7e24a907","Type":"ContainerDied","Data":"5b947e34552d117576ef7a2239d01f276eb51c651b264f2535121438fd3c5777"} Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.878757 4808 scope.go:117] "RemoveContainer" containerID="8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.878878 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-764ddbc49b-rd7qj" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.881816 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerStarted","Data":"cf55bdcbeb3d626ba9dfd3112f8a4875f325fed6c1ddb8829be7326c5b814762"} Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.913557 4808 scope.go:117] "RemoveContainer" containerID="8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.922436 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.922413686 podStartE2EDuration="3.922413686s" podCreationTimestamp="2026-03-11 09:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:56.906066211 +0000 UTC m=+1427.859389541" watchObservedRunningTime="2026-03-11 09:02:56.922413686 +0000 UTC m=+1427.875737016" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.935617 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"] Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.942167 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-764ddbc49b-rd7qj"] Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.952622 4808 scope.go:117] "RemoveContainer" containerID="8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405" Mar 11 09:02:56 crc kubenswrapper[4808]: E0311 09:02:56.953111 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405\": container with ID starting with 8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405 not found: 
ID does not exist" containerID="8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.953149 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405"} err="failed to get container status \"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405\": rpc error: code = NotFound desc = could not find container \"8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405\": container with ID starting with 8add428764c0b0d0319171f3520b3b32308c89e35aac004b113120480955c405 not found: ID does not exist" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.953187 4808 scope.go:117] "RemoveContainer" containerID="8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644" Mar 11 09:02:56 crc kubenswrapper[4808]: E0311 09:02:56.953529 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644\": container with ID starting with 8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644 not found: ID does not exist" containerID="8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644" Mar 11 09:02:56 crc kubenswrapper[4808]: I0311 09:02:56.953561 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644"} err="failed to get container status \"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644\": rpc error: code = NotFound desc = could not find container \"8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644\": container with ID starting with 8dd16f3fab19aaa57bb055c80d4f3c1c63cf41c143b321c09b021c8b3dd5a644 not found: ID does not exist" Mar 11 09:02:57 crc kubenswrapper[4808]: I0311 09:02:57.801475 4808 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" path="/var/lib/kubelet/pods/1d8e77d9-ecc3-4798-9a33-aeaa7e24a907/volumes" Mar 11 09:02:59 crc kubenswrapper[4808]: I0311 09:02:59.851616 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:59 crc kubenswrapper[4808]: I0311 09:02:59.852651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:02:59 crc kubenswrapper[4808]: I0311 09:02:59.976163 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:02:59 crc kubenswrapper[4808]: I0311 09:02:59.976484 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74c8dbd954-d5nz4" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-log" containerID="cri-o://553c984b0b8459dbd21f31f9d84cf1612f1aeedce28219353b4b8bf87f44b318" gracePeriod=30 Mar 11 09:02:59 crc kubenswrapper[4808]: I0311 09:02:59.976734 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74c8dbd954-d5nz4" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-api" containerID="cri-o://2b68ac5b1779d86b6bec7e3aa128e429df297b656bb8d0ebf0dffd758c3998f1" gracePeriod=30 Mar 11 09:03:00 crc kubenswrapper[4808]: I0311 09:03:00.020471 4808 generic.go:334] "Generic (PLEG): container finished" podID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerID="ed0a461d69f053368430d74e235ea6c53d79ec014f123770df58bd842adc0a7b" exitCode=0 Mar 11 09:03:00 crc kubenswrapper[4808]: I0311 09:03:00.020511 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerDied","Data":"ed0a461d69f053368430d74e235ea6c53d79ec014f123770df58bd842adc0a7b"} Mar 11 09:03:01 crc 
kubenswrapper[4808]: I0311 09:03:01.032265 4808 generic.go:334] "Generic (PLEG): container finished" podID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerID="553c984b0b8459dbd21f31f9d84cf1612f1aeedce28219353b4b8bf87f44b318" exitCode=143 Mar 11 09:03:01 crc kubenswrapper[4808]: I0311 09:03:01.032382 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerDied","Data":"553c984b0b8459dbd21f31f9d84cf1612f1aeedce28219353b4b8bf87f44b318"} Mar 11 09:03:01 crc kubenswrapper[4808]: I0311 09:03:01.586041 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:01 crc kubenswrapper[4808]: I0311 09:03:01.588625 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:01 crc kubenswrapper[4808]: I0311 09:03:01.633414 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:01 crc kubenswrapper[4808]: I0311 09:03:01.633720 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:02 crc kubenswrapper[4808]: I0311 09:03:02.048102 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:02 crc kubenswrapper[4808]: I0311 09:03:02.048183 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.055976 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.063996 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.068994 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"739ef767-9cc0-4f25-82d6-4f17ee457f61","Type":"ContainerDied","Data":"3b4e8ac696b140ed6583f58f7391f9d008ec8082bbf95d7ddb74490abcade0f1"} Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.069045 4808 scope.go:117] "RemoveContainer" containerID="bac8715c274189d3e68892b1485f0b76eead2cb66eb41971200f40fbddb0ae58" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.072098 4808 generic.go:334] "Generic (PLEG): container finished" podID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerID="2b68ac5b1779d86b6bec7e3aa128e429df297b656bb8d0ebf0dffd758c3998f1" exitCode=0 Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.072168 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.072455 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerDied","Data":"2b68ac5b1779d86b6bec7e3aa128e429df297b656bb8d0ebf0dffd758c3998f1"} Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.109525 4808 scope.go:117] "RemoveContainer" containerID="ca53a94271c4cc2abe22fff358b29cea2d31a56a715fbe439d44661a8ab9cf8a" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.131526 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.138098 4808 scope.go:117] "RemoveContainer" containerID="2bec327a0d71ff728f791812031118beaf98ec1c3699bce351678b0eb18a3eb3" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.166408 4808 scope.go:117] "RemoveContainer" containerID="ed0a461d69f053368430d74e235ea6c53d79ec014f123770df58bd842adc0a7b" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209324 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209403 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209447 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmdjb\" (UniqueName: \"kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209564 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209609 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.209690 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts\") pod \"739ef767-9cc0-4f25-82d6-4f17ee457f61\" (UID: \"739ef767-9cc0-4f25-82d6-4f17ee457f61\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.210305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.210727 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.214132 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.214158 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb" (OuterVolumeSpecName: "kube-api-access-qmdjb") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "kube-api-access-qmdjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.222704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts" (OuterVolumeSpecName: "scripts") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.257523 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.263845 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.264066 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.310948 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312251 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312397 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312556 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312732 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312861 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvwz\" (UniqueName: \"kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.312959 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs\") pod \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\" (UID: \"0f04cfb3-ade7-4ab5-a497-0be1d758cad7\") " Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.313741 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs" (OuterVolumeSpecName: "logs") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.315825 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.315920 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.315981 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.316044 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.316106 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmdjb\" (UniqueName: \"kubernetes.io/projected/739ef767-9cc0-4f25-82d6-4f17ee457f61-kube-api-access-qmdjb\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.316167 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/739ef767-9cc0-4f25-82d6-4f17ee457f61-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.315870 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.319183 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts" (OuterVolumeSpecName: "scripts") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.320310 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.326717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz" (OuterVolumeSpecName: "kube-api-access-6bvwz") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "kube-api-access-6bvwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.338771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.368056 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data" (OuterVolumeSpecName: "config-data") pod "739ef767-9cc0-4f25-82d6-4f17ee457f61" (UID: "739ef767-9cc0-4f25-82d6-4f17ee457f61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.383101 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.395011 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data" (OuterVolumeSpecName: "config-data") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.418483 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.418521 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.418535 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.418546 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739ef767-9cc0-4f25-82d6-4f17ee457f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 
crc kubenswrapper[4808]: I0311 09:03:04.418558 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bvwz\" (UniqueName: \"kubernetes.io/projected/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-kube-api-access-6bvwz\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.418569 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.432521 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.442671 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f04cfb3-ade7-4ab5-a497-0be1d758cad7" (UID: "0f04cfb3-ade7-4ab5-a497-0be1d758cad7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.519989 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:04 crc kubenswrapper[4808]: I0311 09:03:04.520020 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f04cfb3-ade7-4ab5-a497-0be1d758cad7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.082097 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.085321 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" event={"ID":"16e5dd70-3baf-4a95-be4e-d3f27d963aa6","Type":"ContainerStarted","Data":"f69486871219bd5a3cab0b6e63e7846b8bd6ca99e12a1918f2001e44c12769b8"} Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.092251 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74c8dbd954-d5nz4" event={"ID":"0f04cfb3-ade7-4ab5-a497-0be1d758cad7","Type":"ContainerDied","Data":"b4280b9132054522d6ddf770c67d9de49349a38b2b63727f0ff4611247da7761"} Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.092294 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.092311 4808 scope.go:117] "RemoveContainer" containerID="2b68ac5b1779d86b6bec7e3aa128e429df297b656bb8d0ebf0dffd758c3998f1" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.092463 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74c8dbd954-d5nz4" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.093503 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.115113 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" podStartSLOduration=2.590734779 podStartE2EDuration="12.115084851s" podCreationTimestamp="2026-03-11 09:02:53 +0000 UTC" firstStartedPulling="2026-03-11 09:02:54.109826636 +0000 UTC m=+1425.063149956" lastFinishedPulling="2026-03-11 09:03:03.634176708 +0000 UTC m=+1434.587500028" observedRunningTime="2026-03-11 09:03:05.110190958 +0000 UTC m=+1436.063514278" watchObservedRunningTime="2026-03-11 09:03:05.115084851 +0000 UTC m=+1436.068408201" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.117955 4808 scope.go:117] "RemoveContainer" containerID="553c984b0b8459dbd21f31f9d84cf1612f1aeedce28219353b4b8bf87f44b318" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.154399 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.161428 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.168855 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.176067 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-74c8dbd954-d5nz4"] Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.200114 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.200790 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" 
containerName="ceilometer-central-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.200869 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-central-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.200927 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-api" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.200983 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-api" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201053 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-api" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201103 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-api" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201158 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201214 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201267 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-log" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201322 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-log" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201400 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" 
containerName="sg-core" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201452 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="sg-core" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201510 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="proxy-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201566 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="proxy-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: E0311 09:03:05.201637 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-notification-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201689 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-notification-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201916 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="sg-core" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.201992 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-api" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202046 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" containerName="placement-log" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202106 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-central-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202229 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" 
containerName="placement-api" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202283 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8e77d9-ecc3-4798-9a33-aeaa7e24a907" containerName="neutron-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202347 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="proxy-httpd" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.202414 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" containerName="ceilometer-notification-agent" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.204297 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.207076 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.207951 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.207961 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335711 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwm5\" (UniqueName: \"kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335818 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335835 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335855 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.335869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.437729 
4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcwm5\" (UniqueName: \"kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.437823 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.437848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.437873 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.437894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.438013 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd\") pod \"ceilometer-0\" (UID: 
\"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.438045 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.439602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.439634 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.443114 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.443297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.443646 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.444340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.475945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcwm5\" (UniqueName: \"kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5\") pod \"ceilometer-0\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.521287 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.807290 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f04cfb3-ade7-4ab5-a497-0be1d758cad7" path="/var/lib/kubelet/pods/0f04cfb3-ade7-4ab5-a497-0be1d758cad7/volumes" Mar 11 09:03:05 crc kubenswrapper[4808]: I0311 09:03:05.808140 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739ef767-9cc0-4f25-82d6-4f17ee457f61" path="/var/lib/kubelet/pods/739ef767-9cc0-4f25-82d6-4f17ee457f61/volumes" Mar 11 09:03:06 crc kubenswrapper[4808]: I0311 09:03:06.011532 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:06 crc kubenswrapper[4808]: W0311 09:03:06.020248 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de31e55_309e_4501_87bd_656bc7aa9371.slice/crio-8a8d07ad8486f9f409e88404cf7043aa3de1f91cd75355cf9a82b489f36bd6ae WatchSource:0}: Error finding container 8a8d07ad8486f9f409e88404cf7043aa3de1f91cd75355cf9a82b489f36bd6ae: Status 404 returned error can't find the container with id 8a8d07ad8486f9f409e88404cf7043aa3de1f91cd75355cf9a82b489f36bd6ae Mar 11 09:03:06 crc kubenswrapper[4808]: I0311 09:03:06.099240 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerStarted","Data":"8a8d07ad8486f9f409e88404cf7043aa3de1f91cd75355cf9a82b489f36bd6ae"} Mar 11 09:03:07 crc kubenswrapper[4808]: I0311 09:03:07.051395 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:03:07 crc kubenswrapper[4808]: I0311 09:03:07.090958 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:03:07 crc kubenswrapper[4808]: I0311 09:03:07.144652 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerStarted","Data":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} Mar 11 09:03:10 crc kubenswrapper[4808]: I0311 09:03:10.174027 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerStarted","Data":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} Mar 11 09:03:11 crc kubenswrapper[4808]: I0311 09:03:11.185539 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerStarted","Data":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} Mar 11 09:03:13 crc kubenswrapper[4808]: I0311 09:03:13.204309 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerStarted","Data":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} Mar 11 09:03:13 crc kubenswrapper[4808]: I0311 09:03:13.204814 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:03:13 crc kubenswrapper[4808]: I0311 09:03:13.226110 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.955322358 podStartE2EDuration="8.226095422s" podCreationTimestamp="2026-03-11 09:03:05 +0000 UTC" firstStartedPulling="2026-03-11 09:03:06.023182655 +0000 UTC m=+1436.976505975" lastFinishedPulling="2026-03-11 09:03:12.293955729 +0000 UTC m=+1443.247279039" observedRunningTime="2026-03-11 09:03:13.222346213 +0000 UTC m=+1444.175669533" watchObservedRunningTime="2026-03-11 09:03:13.226095422 +0000 UTC m=+1444.179418742" Mar 11 09:03:13 crc kubenswrapper[4808]: I0311 09:03:13.707732 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Mar 11 09:03:15 crc kubenswrapper[4808]: I0311 09:03:15.219176 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-central-agent" containerID="cri-o://4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" gracePeriod=30 Mar 11 09:03:15 crc kubenswrapper[4808]: I0311 09:03:15.219237 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="proxy-httpd" containerID="cri-o://e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" gracePeriod=30 Mar 11 09:03:15 crc kubenswrapper[4808]: I0311 09:03:15.219260 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-notification-agent" containerID="cri-o://fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" gracePeriod=30 Mar 11 09:03:15 crc kubenswrapper[4808]: I0311 09:03:15.219241 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="sg-core" containerID="cri-o://da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" gracePeriod=30 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.023055 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130748 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130777 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcwm5\" (UniqueName: \"kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130930 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.130995 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.131049 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts\") pod \"3de31e55-309e-4501-87bd-656bc7aa9371\" (UID: \"3de31e55-309e-4501-87bd-656bc7aa9371\") " Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.131393 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.131715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.131736 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.136308 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts" (OuterVolumeSpecName: "scripts") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.137846 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5" (OuterVolumeSpecName: "kube-api-access-fcwm5") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "kube-api-access-fcwm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.167874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.224733 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.231006 4808 generic.go:334] "Generic (PLEG): container finished" podID="16e5dd70-3baf-4a95-be4e-d3f27d963aa6" containerID="f69486871219bd5a3cab0b6e63e7846b8bd6ca99e12a1918f2001e44c12769b8" exitCode=0 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.231066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" event={"ID":"16e5dd70-3baf-4a95-be4e-d3f27d963aa6","Type":"ContainerDied","Data":"f69486871219bd5a3cab0b6e63e7846b8bd6ca99e12a1918f2001e44c12769b8"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.233396 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3de31e55-309e-4501-87bd-656bc7aa9371-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.233430 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcwm5\" (UniqueName: \"kubernetes.io/projected/3de31e55-309e-4501-87bd-656bc7aa9371-kube-api-access-fcwm5\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.233446 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.233461 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.233508 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc 
kubenswrapper[4808]: I0311 09:03:16.239033 4808 generic.go:334] "Generic (PLEG): container finished" podID="3de31e55-309e-4501-87bd-656bc7aa9371" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" exitCode=0 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239062 4808 generic.go:334] "Generic (PLEG): container finished" podID="3de31e55-309e-4501-87bd-656bc7aa9371" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" exitCode=2 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239071 4808 generic.go:334] "Generic (PLEG): container finished" podID="3de31e55-309e-4501-87bd-656bc7aa9371" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" exitCode=0 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239081 4808 generic.go:334] "Generic (PLEG): container finished" podID="3de31e55-309e-4501-87bd-656bc7aa9371" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" exitCode=0 Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239103 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerDied","Data":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerDied","Data":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerDied","Data":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239153 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerDied","Data":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3de31e55-309e-4501-87bd-656bc7aa9371","Type":"ContainerDied","Data":"8a8d07ad8486f9f409e88404cf7043aa3de1f91cd75355cf9a82b489f36bd6ae"} Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239184 4808 scope.go:117] "RemoveContainer" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.239280 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.249567 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data" (OuterVolumeSpecName: "config-data") pod "3de31e55-309e-4501-87bd-656bc7aa9371" (UID: "3de31e55-309e-4501-87bd-656bc7aa9371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.269353 4808 scope.go:117] "RemoveContainer" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.290333 4808 scope.go:117] "RemoveContainer" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.308050 4808 scope.go:117] "RemoveContainer" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.328109 4808 scope.go:117] "RemoveContainer" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.328574 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": container with ID starting with e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f not found: ID does not exist" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.328615 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} err="failed to get container status \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": rpc error: code = NotFound desc = could not find container \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": container with ID starting with e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.328638 4808 scope.go:117] "RemoveContainer" 
containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.328978 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": container with ID starting with da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db not found: ID does not exist" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329012 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} err="failed to get container status \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": rpc error: code = NotFound desc = could not find container \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": container with ID starting with da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329038 4808 scope.go:117] "RemoveContainer" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.329272 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": container with ID starting with fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207 not found: ID does not exist" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329295 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} err="failed to get container status \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": rpc error: code = NotFound desc = could not find container \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": container with ID starting with fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207 not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329309 4808 scope.go:117] "RemoveContainer" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.329665 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": container with ID starting with 4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb not found: ID does not exist" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329682 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} err="failed to get container status \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": rpc error: code = NotFound desc = could not find container \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": container with ID starting with 4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.329695 4808 scope.go:117] "RemoveContainer" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330092 4808 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} err="failed to get container status \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": rpc error: code = NotFound desc = could not find container \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": container with ID starting with e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330152 4808 scope.go:117] "RemoveContainer" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330524 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} err="failed to get container status \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": rpc error: code = NotFound desc = could not find container \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": container with ID starting with da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330551 4808 scope.go:117] "RemoveContainer" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330771 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} err="failed to get container status \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": rpc error: code = NotFound desc = could not find container \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": container with ID starting with fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207 not 
found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.330790 4808 scope.go:117] "RemoveContainer" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331003 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} err="failed to get container status \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": rpc error: code = NotFound desc = could not find container \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": container with ID starting with 4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331029 4808 scope.go:117] "RemoveContainer" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331225 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} err="failed to get container status \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": rpc error: code = NotFound desc = could not find container \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": container with ID starting with e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331241 4808 scope.go:117] "RemoveContainer" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331503 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} err="failed to get 
container status \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": rpc error: code = NotFound desc = could not find container \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": container with ID starting with da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331546 4808 scope.go:117] "RemoveContainer" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331802 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} err="failed to get container status \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": rpc error: code = NotFound desc = could not find container \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": container with ID starting with fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207 not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.331820 4808 scope.go:117] "RemoveContainer" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.332166 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} err="failed to get container status \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": rpc error: code = NotFound desc = could not find container \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": container with ID starting with 4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.335413 4808 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de31e55-309e-4501-87bd-656bc7aa9371-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.348152 4808 scope.go:117] "RemoveContainer" containerID="e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.348572 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f"} err="failed to get container status \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": rpc error: code = NotFound desc = could not find container \"e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f\": container with ID starting with e23720cbf5892f051460487965ac84601f530a5305b2287572756a2eda3f540f not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.348605 4808 scope.go:117] "RemoveContainer" containerID="da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.348908 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db"} err="failed to get container status \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": rpc error: code = NotFound desc = could not find container \"da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db\": container with ID starting with da3f628818cb376f34eaf2a8c86934f297856acda7d6a10bc69704ab26aec7db not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.348944 4808 scope.go:117] "RemoveContainer" containerID="fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.349235 4808 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207"} err="failed to get container status \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": rpc error: code = NotFound desc = could not find container \"fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207\": container with ID starting with fec39c8a8c3d87d26583ad89ff3733ac0678350e8f612a52e8f61ffd8e3ba207 not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.349257 4808 scope.go:117] "RemoveContainer" containerID="4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.349580 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb"} err="failed to get container status \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": rpc error: code = NotFound desc = could not find container \"4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb\": container with ID starting with 4ed7ed8b84fbb2f437e683872a6f8fc305094422bcecfc1826baf479a5fc57eb not found: ID does not exist" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.590988 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.605215 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.614912 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.615314 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="sg-core" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615339 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="sg-core" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.615369 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="proxy-httpd" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615376 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="proxy-httpd" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.615393 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-central-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615400 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-central-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: E0311 09:03:16.615418 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-notification-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615424 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-notification-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615592 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-central-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615604 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="ceilometer-notification-agent" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615613 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="proxy-httpd" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.615621 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" containerName="sg-core" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.617132 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.619164 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.619645 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.625119 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.639808 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.639880 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.640009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.640069 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcxrw\" (UniqueName: \"kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.640105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.640163 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.640196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.741838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.742229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcxrw\" (UniqueName: 
\"kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.742478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.742683 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.742847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.743109 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.743268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.743300 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.744468 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.746808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.747825 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.748142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.748314 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 
09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.772178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcxrw\" (UniqueName: \"kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw\") pod \"ceilometer-0\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " pod="openstack/ceilometer-0" Mar 11 09:03:16 crc kubenswrapper[4808]: I0311 09:03:16.957866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.477307 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.567064 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.669879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data\") pod \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.670008 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds98q\" (UniqueName: \"kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q\") pod \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.670095 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts\") pod \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.670182 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle\") pod \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\" (UID: \"16e5dd70-3baf-4a95-be4e-d3f27d963aa6\") " Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.675029 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q" (OuterVolumeSpecName: "kube-api-access-ds98q") pod "16e5dd70-3baf-4a95-be4e-d3f27d963aa6" (UID: "16e5dd70-3baf-4a95-be4e-d3f27d963aa6"). InnerVolumeSpecName "kube-api-access-ds98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.676518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts" (OuterVolumeSpecName: "scripts") pod "16e5dd70-3baf-4a95-be4e-d3f27d963aa6" (UID: "16e5dd70-3baf-4a95-be4e-d3f27d963aa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.696768 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16e5dd70-3baf-4a95-be4e-d3f27d963aa6" (UID: "16e5dd70-3baf-4a95-be4e-d3f27d963aa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.699228 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data" (OuterVolumeSpecName: "config-data") pod "16e5dd70-3baf-4a95-be4e-d3f27d963aa6" (UID: "16e5dd70-3baf-4a95-be4e-d3f27d963aa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.772378 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.772426 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.772442 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.772454 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds98q\" (UniqueName: \"kubernetes.io/projected/16e5dd70-3baf-4a95-be4e-d3f27d963aa6-kube-api-access-ds98q\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:17 crc kubenswrapper[4808]: I0311 09:03:17.813665 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de31e55-309e-4501-87bd-656bc7aa9371" path="/var/lib/kubelet/pods/3de31e55-309e-4501-87bd-656bc7aa9371/volumes" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.312304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" event={"ID":"16e5dd70-3baf-4a95-be4e-d3f27d963aa6","Type":"ContainerDied","Data":"7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29"} Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.313001 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6ae4efd2d1831d694d2b121e1d686b2864d282cf76b009f326aefb6ac90c29" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.312605 4808 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dmgm7" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.315966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerStarted","Data":"3d591601e45559ff655035fdbfc985b5b9765c93bc91377b334eda6415550ede"} Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.426437 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:03:18 crc kubenswrapper[4808]: E0311 09:03:18.427126 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e5dd70-3baf-4a95-be4e-d3f27d963aa6" containerName="nova-cell0-conductor-db-sync" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.427241 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e5dd70-3baf-4a95-be4e-d3f27d963aa6" containerName="nova-cell0-conductor-db-sync" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.427561 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e5dd70-3baf-4a95-be4e-d3f27d963aa6" containerName="nova-cell0-conductor-db-sync" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.428369 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.432311 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.441864 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h5bjm" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.465913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.498161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5hsw\" (UniqueName: \"kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.498494 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.498570 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.599997 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5hsw\" (UniqueName: 
\"kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.600307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.600509 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.605310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.605390 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.618224 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5hsw\" (UniqueName: \"kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw\") pod \"nova-cell0-conductor-0\" (UID: 
\"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:18 crc kubenswrapper[4808]: I0311 09:03:18.853594 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:19 crc kubenswrapper[4808]: I0311 09:03:19.329686 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerStarted","Data":"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77"} Mar 11 09:03:19 crc kubenswrapper[4808]: I0311 09:03:19.330074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerStarted","Data":"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c"} Mar 11 09:03:19 crc kubenswrapper[4808]: I0311 09:03:19.401978 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:03:19 crc kubenswrapper[4808]: W0311 09:03:19.406851 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8798b96_74d7_4e0e_a4c7_97f3c995544b.slice/crio-c3154a9665cef62668e2a53aa7ca84b5b233bad0c5f30cf7eda704b78e2abdbd WatchSource:0}: Error finding container c3154a9665cef62668e2a53aa7ca84b5b233bad0c5f30cf7eda704b78e2abdbd: Status 404 returned error can't find the container with id c3154a9665cef62668e2a53aa7ca84b5b233bad0c5f30cf7eda704b78e2abdbd Mar 11 09:03:20 crc kubenswrapper[4808]: I0311 09:03:20.340161 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c8798b96-74d7-4e0e-a4c7-97f3c995544b","Type":"ContainerStarted","Data":"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3"} Mar 11 09:03:20 crc kubenswrapper[4808]: I0311 09:03:20.340544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-0" event={"ID":"c8798b96-74d7-4e0e-a4c7-97f3c995544b","Type":"ContainerStarted","Data":"c3154a9665cef62668e2a53aa7ca84b5b233bad0c5f30cf7eda704b78e2abdbd"} Mar 11 09:03:20 crc kubenswrapper[4808]: I0311 09:03:20.340781 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:20 crc kubenswrapper[4808]: I0311 09:03:20.342658 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerStarted","Data":"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a"} Mar 11 09:03:20 crc kubenswrapper[4808]: I0311 09:03:20.358503 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.35848449 podStartE2EDuration="2.35848449s" podCreationTimestamp="2026-03-11 09:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:20.356991267 +0000 UTC m=+1451.310314587" watchObservedRunningTime="2026-03-11 09:03:20.35848449 +0000 UTC m=+1451.311807820" Mar 11 09:03:22 crc kubenswrapper[4808]: I0311 09:03:22.361572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerStarted","Data":"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75"} Mar 11 09:03:22 crc kubenswrapper[4808]: I0311 09:03:22.362095 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:03:22 crc kubenswrapper[4808]: I0311 09:03:22.403913 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.201147276 podStartE2EDuration="6.403887252s" podCreationTimestamp="2026-03-11 09:03:16 +0000 UTC" firstStartedPulling="2026-03-11 
09:03:17.481129818 +0000 UTC m=+1448.434453138" lastFinishedPulling="2026-03-11 09:03:21.683869794 +0000 UTC m=+1452.637193114" observedRunningTime="2026-03-11 09:03:22.399231016 +0000 UTC m=+1453.352554346" watchObservedRunningTime="2026-03-11 09:03:22.403887252 +0000 UTC m=+1453.357210572" Mar 11 09:03:28 crc kubenswrapper[4808]: I0311 09:03:28.900852 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.419246 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-f44ls"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.420353 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.426107 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.427996 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.445168 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f44ls"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.538699 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.538754 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.538791 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.538872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsvq\" (UniqueName: \"kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.614305 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.616546 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.623476 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.626045 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.627481 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.632813 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.639087 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.640693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.640748 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.640796 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.640817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsvq\" (UniqueName: \"kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 
09:03:29.653802 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.660713 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.671684 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsvq\" (UniqueName: \"kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.677403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.713954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f44ls\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.734830 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.741217 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742346 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742461 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlwmc\" (UniqueName: \"kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742489 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742520 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742541 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 
09:03:29.742591 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfbw7\" (UniqueName: \"kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.742617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.747935 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.763554 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.763937 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.764649 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.767416 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.778182 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.838585 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.843959 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfbw7\" (UniqueName: \"kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844013 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844072 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844115 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") 
" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844186 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844231 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlwmc\" (UniqueName: \"kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844264 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844314 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844336 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lzl\" (UniqueName: \"kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.844429 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j92g\" (UniqueName: \"kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.845383 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.851712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.854250 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.854508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.855871 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.880228 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlwmc\" (UniqueName: \"kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc\") pod \"nova-api-0\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") " pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 
09:03:29.888928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfbw7\" (UniqueName: \"kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7\") pod \"nova-cell1-novncproxy-0\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.904571 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.906551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.915462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.946657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947200 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " 
pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947379 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947415 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947470 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49lzl\" (UniqueName: \"kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947490 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7j92g\" (UniqueName: \"kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94wr\" (UniqueName: \"kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947628 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.947713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.952851 4808 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.954868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.957269 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.957382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.957974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.981392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.982603 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49lzl\" (UniqueName: 
\"kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl\") pod \"nova-scheduler-0\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:29 crc kubenswrapper[4808]: I0311 09:03:29.985890 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j92g\" (UniqueName: \"kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g\") pod \"nova-metadata-0\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " pod="openstack/nova-metadata-0" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.049497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.050487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.050518 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.050656 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94wr\" (UniqueName: \"kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr\") pod 
\"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.050691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.050777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.051692 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.052213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.052720 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " 
pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.053229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.054030 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.070081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94wr\" (UniqueName: \"kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr\") pod \"dnsmasq-dns-7bd5679c8c-lnxwb\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.147662 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.173258 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.188702 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.234842 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.369378 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f44ls"] Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.475484 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z55v"] Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.476626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f44ls" event={"ID":"1132fd26-9b0b-4a76-9e1c-ad025025ed8a","Type":"ContainerStarted","Data":"d4c37321ced49c9b2c57a3ebf9154935693f9a6f8bc958e370ba56846526dd17"} Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.476715 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.490999 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.491229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.531580 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z55v"] Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.563659 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzwn\" (UniqueName: \"kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.564016 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.564065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.564116 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.573010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.665323 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.665653 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: 
\"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.665852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.666015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzwn\" (UniqueName: \"kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.681198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.688146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.692033 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") 
" pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.710587 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzwn\" (UniqueName: \"kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn\") pod \"nova-cell1-conductor-db-sync-5z55v\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") " pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.820639 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z55v" Mar 11 09:03:30 crc kubenswrapper[4808]: I0311 09:03:30.824961 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:03:31 crc kubenswrapper[4808]: W0311 09:03:31.041861 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d36885b_b25e_467a_bdd9_bb9100e7f02b.slice/crio-6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a WatchSource:0}: Error finding container 6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a: Status 404 returned error can't find the container with id 6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.047115 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.108250 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:31 crc kubenswrapper[4808]: W0311 09:03:31.111597 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6626039_dc3f_429c_a561_aa9131e93439.slice/crio-5ddfd2c87b476a65c53b947d0fc3d1f52674ce544943cc55d4aec13734a8ce6f WatchSource:0}: Error 
finding container 5ddfd2c87b476a65c53b947d0fc3d1f52674ce544943cc55d4aec13734a8ce6f: Status 404 returned error can't find the container with id 5ddfd2c87b476a65c53b947d0fc3d1f52674ce544943cc55d4aec13734a8ce6f Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.246164 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.484373 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerStarted","Data":"56e9401b5adb177aed8af72629b388147f4e6bbb37cdacf57cfb069cd8812eb2"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.485782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"576445f3-2e23-46f5-94a4-aac06720e4d4","Type":"ContainerStarted","Data":"bfa946d540b9860f72a7dd2c7e0224391aadb2384202d9902d3a241f31ba1be1"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.486853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerStarted","Data":"5ddfd2c87b476a65c53b947d0fc3d1f52674ce544943cc55d4aec13734a8ce6f"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.492795 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f44ls" event={"ID":"1132fd26-9b0b-4a76-9e1c-ad025025ed8a","Type":"ContainerStarted","Data":"6030d37cc99ab2df68249c8c25ca42f7c3e532c1dfe31feb2d90ccfa67c70a0c"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.503622 4808 generic.go:334] "Generic (PLEG): container finished" podID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerID="2098f288f36d618b0b8ff8db38f323ea05e37a3350ea1f3a2a163a5c4608f26c" exitCode=0 Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.503686 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" event={"ID":"0d36885b-b25e-467a-bdd9-bb9100e7f02b","Type":"ContainerDied","Data":"2098f288f36d618b0b8ff8db38f323ea05e37a3350ea1f3a2a163a5c4608f26c"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.503711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" event={"ID":"0d36885b-b25e-467a-bdd9-bb9100e7f02b","Type":"ContainerStarted","Data":"6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.507517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8abee-130a-4290-8602-6b5b4ca8860e","Type":"ContainerStarted","Data":"47a018c2361547e7a5e5b6099e8a0e6add5f6c574bf3fec9bd8657253de2d29d"} Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.520009 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-f44ls" podStartSLOduration=2.519990297 podStartE2EDuration="2.519990297s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:31.51494859 +0000 UTC m=+1462.468271920" watchObservedRunningTime="2026-03-11 09:03:31.519990297 +0000 UTC m=+1462.473313617" Mar 11 09:03:31 crc kubenswrapper[4808]: I0311 09:03:31.901317 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z55v"] Mar 11 09:03:32 crc kubenswrapper[4808]: I0311 09:03:32.517566 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" event={"ID":"0d36885b-b25e-467a-bdd9-bb9100e7f02b","Type":"ContainerStarted","Data":"f7de189655d9da6f7a1e8cc7c1967571cdb0ed5bcda3796b4f41b1596fabc0a5"} Mar 11 09:03:32 crc kubenswrapper[4808]: I0311 09:03:32.517673 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:32 crc kubenswrapper[4808]: I0311 09:03:32.547211 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" podStartSLOduration=3.547188873 podStartE2EDuration="3.547188873s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:32.540099617 +0000 UTC m=+1463.493422947" watchObservedRunningTime="2026-03-11 09:03:32.547188873 +0000 UTC m=+1463.500512193" Mar 11 09:03:33 crc kubenswrapper[4808]: I0311 09:03:33.528625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z55v" event={"ID":"cc20fcdb-674b-47bf-abcb-c7985d23f8c8","Type":"ContainerStarted","Data":"ddd6dafc7a93efd6d599e16c18894ab73186a1a64146a4cc5a4c641026b134af"} Mar 11 09:03:33 crc kubenswrapper[4808]: I0311 09:03:33.758278 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:33 crc kubenswrapper[4808]: I0311 09:03:33.804263 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.545790 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"576445f3-2e23-46f5-94a4-aac06720e4d4","Type":"ContainerStarted","Data":"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.545915 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="576445f3-2e23-46f5-94a4-aac06720e4d4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e" gracePeriod=30 Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.556470 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerStarted","Data":"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.556526 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerStarted","Data":"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.556704 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-log" containerID="cri-o://fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" gracePeriod=30 Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.556843 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-metadata" containerID="cri-o://69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" gracePeriod=30 Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.559839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8abee-130a-4290-8602-6b5b4ca8860e","Type":"ContainerStarted","Data":"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.574778 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.13833787 podStartE2EDuration="6.574763072s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" firstStartedPulling="2026-03-11 09:03:30.845621666 +0000 UTC m=+1461.798944986" lastFinishedPulling="2026-03-11 09:03:34.282046868 +0000 UTC 
m=+1465.235370188" observedRunningTime="2026-03-11 09:03:35.57471082 +0000 UTC m=+1466.528034130" watchObservedRunningTime="2026-03-11 09:03:35.574763072 +0000 UTC m=+1466.528086392" Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.577236 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerStarted","Data":"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.577274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerStarted","Data":"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.579707 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z55v" event={"ID":"cc20fcdb-674b-47bf-abcb-c7985d23f8c8","Type":"ContainerStarted","Data":"f5993ed9fccb696165a5956e8d830cce2b087e556012593cbecfbedebe131fea"} Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.600793 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.463480081 podStartE2EDuration="6.600777338s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" firstStartedPulling="2026-03-11 09:03:31.119143206 +0000 UTC m=+1462.072466526" lastFinishedPulling="2026-03-11 09:03:34.256440463 +0000 UTC m=+1465.209763783" observedRunningTime="2026-03-11 09:03:35.592852217 +0000 UTC m=+1466.546175537" watchObservedRunningTime="2026-03-11 09:03:35.600777338 +0000 UTC m=+1466.554100658" Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.615066 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.598913348 podStartE2EDuration="6.615051823s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" 
firstStartedPulling="2026-03-11 09:03:31.264489321 +0000 UTC m=+1462.217812641" lastFinishedPulling="2026-03-11 09:03:34.280627786 +0000 UTC m=+1465.233951116" observedRunningTime="2026-03-11 09:03:35.608323897 +0000 UTC m=+1466.561647217" watchObservedRunningTime="2026-03-11 09:03:35.615051823 +0000 UTC m=+1466.568375143" Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.631371 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5z55v" podStartSLOduration=5.631337146 podStartE2EDuration="5.631337146s" podCreationTimestamp="2026-03-11 09:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:35.623781336 +0000 UTC m=+1466.577104656" watchObservedRunningTime="2026-03-11 09:03:35.631337146 +0000 UTC m=+1466.584660456" Mar 11 09:03:35 crc kubenswrapper[4808]: I0311 09:03:35.648719 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.913432863 podStartE2EDuration="6.648698961s" podCreationTimestamp="2026-03-11 09:03:29 +0000 UTC" firstStartedPulling="2026-03-11 09:03:30.545321847 +0000 UTC m=+1461.498645167" lastFinishedPulling="2026-03-11 09:03:34.280587945 +0000 UTC m=+1465.233911265" observedRunningTime="2026-03-11 09:03:35.640938805 +0000 UTC m=+1466.594262125" watchObservedRunningTime="2026-03-11 09:03:35.648698961 +0000 UTC m=+1466.602022281" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.145796 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.189530 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs\") pod \"c6626039-dc3f-429c-a561-aa9131e93439\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.189590 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle\") pod \"c6626039-dc3f-429c-a561-aa9131e93439\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.189633 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data\") pod \"c6626039-dc3f-429c-a561-aa9131e93439\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.189734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j92g\" (UniqueName: \"kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g\") pod \"c6626039-dc3f-429c-a561-aa9131e93439\" (UID: \"c6626039-dc3f-429c-a561-aa9131e93439\") " Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.190019 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs" (OuterVolumeSpecName: "logs") pod "c6626039-dc3f-429c-a561-aa9131e93439" (UID: "c6626039-dc3f-429c-a561-aa9131e93439"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.199994 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g" (OuterVolumeSpecName: "kube-api-access-7j92g") pod "c6626039-dc3f-429c-a561-aa9131e93439" (UID: "c6626039-dc3f-429c-a561-aa9131e93439"). InnerVolumeSpecName "kube-api-access-7j92g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.224593 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data" (OuterVolumeSpecName: "config-data") pod "c6626039-dc3f-429c-a561-aa9131e93439" (UID: "c6626039-dc3f-429c-a561-aa9131e93439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.233067 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6626039-dc3f-429c-a561-aa9131e93439" (UID: "c6626039-dc3f-429c-a561-aa9131e93439"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.291628 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6626039-dc3f-429c-a561-aa9131e93439-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.291663 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.291674 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6626039-dc3f-429c-a561-aa9131e93439-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.291683 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j92g\" (UniqueName: \"kubernetes.io/projected/c6626039-dc3f-429c-a561-aa9131e93439-kube-api-access-7j92g\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.588623 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6626039-dc3f-429c-a561-aa9131e93439" containerID="69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" exitCode=0 Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.588909 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6626039-dc3f-429c-a561-aa9131e93439" containerID="fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" exitCode=143 Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.589708 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.616665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerDied","Data":"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b"} Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.616732 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerDied","Data":"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24"} Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.616747 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6626039-dc3f-429c-a561-aa9131e93439","Type":"ContainerDied","Data":"5ddfd2c87b476a65c53b947d0fc3d1f52674ce544943cc55d4aec13734a8ce6f"} Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.616789 4808 scope.go:117] "RemoveContainer" containerID="69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.677927 4808 scope.go:117] "RemoveContainer" containerID="fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.701235 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.719928 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.732468 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:36 crc kubenswrapper[4808]: E0311 09:03:36.733181 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-log" Mar 11 09:03:36 crc 
kubenswrapper[4808]: I0311 09:03:36.733206 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-log" Mar 11 09:03:36 crc kubenswrapper[4808]: E0311 09:03:36.733259 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-metadata" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.733268 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-metadata" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.733828 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-log" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.733863 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6626039-dc3f-429c-a561-aa9131e93439" containerName="nova-metadata-metadata" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.737693 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.743907 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.744170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.747759 4808 scope.go:117] "RemoveContainer" containerID="69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" Mar 11 09:03:36 crc kubenswrapper[4808]: E0311 09:03:36.748214 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b\": container with ID starting with 69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b not found: ID does not exist" containerID="69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.748253 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b"} err="failed to get container status \"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b\": rpc error: code = NotFound desc = could not find container \"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b\": container with ID starting with 69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b not found: ID does not exist" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.748274 4808 scope.go:117] "RemoveContainer" containerID="fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" Mar 11 09:03:36 crc kubenswrapper[4808]: E0311 09:03:36.749426 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24\": container with ID starting with fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24 not found: ID does not exist" containerID="fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.749480 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24"} err="failed to get container status \"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24\": rpc error: code = NotFound desc = could not find container \"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24\": container with ID starting with fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24 not found: ID does not exist" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.749516 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.749524 4808 scope.go:117] "RemoveContainer" containerID="69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.749911 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b"} err="failed to get container status \"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b\": rpc error: code = NotFound desc = could not find container \"69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b\": container with ID starting with 69bbb5848b3abb29c26dcdb1eb9dcfa710c4879c59533ff9f7b2458ec1334b6b not found: ID does not exist" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.749939 4808 scope.go:117] "RemoveContainer" containerID="fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24" Mar 11 09:03:36 
crc kubenswrapper[4808]: I0311 09:03:36.750383 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24"} err="failed to get container status \"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24\": rpc error: code = NotFound desc = could not find container \"fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24\": container with ID starting with fa74c0fdb8f9368d032764aad842a51d876b8fa2dff4efa44e853b2e3db41c24 not found: ID does not exist" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.861999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.862087 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhcj2\" (UniqueName: \"kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.862142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.862284 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs\") 
pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.862321 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.963326 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.963407 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhcj2\" (UniqueName: \"kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.963446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.963584 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 
09:03:36.963614 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.964449 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.967638 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.968043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.975450 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:36 crc kubenswrapper[4808]: I0311 09:03:36.981434 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhcj2\" (UniqueName: \"kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2\") pod 
\"nova-metadata-0\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " pod="openstack/nova-metadata-0" Mar 11 09:03:37 crc kubenswrapper[4808]: I0311 09:03:37.059612 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:37 crc kubenswrapper[4808]: I0311 09:03:37.545210 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:37 crc kubenswrapper[4808]: W0311 09:03:37.565141 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5642ba8b_850b_4677_9c9f_1c90de8c712b.slice/crio-57d4511300033b9d1e9ec77de80d17a4a25283cf37b63315f830e243930997ad WatchSource:0}: Error finding container 57d4511300033b9d1e9ec77de80d17a4a25283cf37b63315f830e243930997ad: Status 404 returned error can't find the container with id 57d4511300033b9d1e9ec77de80d17a4a25283cf37b63315f830e243930997ad Mar 11 09:03:37 crc kubenswrapper[4808]: I0311 09:03:37.603673 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerStarted","Data":"57d4511300033b9d1e9ec77de80d17a4a25283cf37b63315f830e243930997ad"} Mar 11 09:03:37 crc kubenswrapper[4808]: I0311 09:03:37.802589 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6626039-dc3f-429c-a561-aa9131e93439" path="/var/lib/kubelet/pods/c6626039-dc3f-429c-a561-aa9131e93439/volumes" Mar 11 09:03:38 crc kubenswrapper[4808]: I0311 09:03:38.613092 4808 generic.go:334] "Generic (PLEG): container finished" podID="1132fd26-9b0b-4a76-9e1c-ad025025ed8a" containerID="6030d37cc99ab2df68249c8c25ca42f7c3e532c1dfe31feb2d90ccfa67c70a0c" exitCode=0 Mar 11 09:03:38 crc kubenswrapper[4808]: I0311 09:03:38.613219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f44ls" 
event={"ID":"1132fd26-9b0b-4a76-9e1c-ad025025ed8a","Type":"ContainerDied","Data":"6030d37cc99ab2df68249c8c25ca42f7c3e532c1dfe31feb2d90ccfa67c70a0c"} Mar 11 09:03:38 crc kubenswrapper[4808]: I0311 09:03:38.617144 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerStarted","Data":"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d"} Mar 11 09:03:38 crc kubenswrapper[4808]: I0311 09:03:38.617191 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerStarted","Data":"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363"} Mar 11 09:03:38 crc kubenswrapper[4808]: I0311 09:03:38.652025 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.652007534 podStartE2EDuration="2.652007534s" podCreationTimestamp="2026-03-11 09:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:38.648672137 +0000 UTC m=+1469.601995457" watchObservedRunningTime="2026-03-11 09:03:38.652007534 +0000 UTC m=+1469.605330854" Mar 11 09:03:39 crc kubenswrapper[4808]: I0311 09:03:39.956652 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:03:39 crc kubenswrapper[4808]: I0311 09:03:39.957014 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.012300 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.126182 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data\") pod \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.126292 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts\") pod \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.126436 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsvq\" (UniqueName: \"kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq\") pod \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.126482 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle\") pod \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\" (UID: \"1132fd26-9b0b-4a76-9e1c-ad025025ed8a\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.134714 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts" (OuterVolumeSpecName: "scripts") pod "1132fd26-9b0b-4a76-9e1c-ad025025ed8a" (UID: "1132fd26-9b0b-4a76-9e1c-ad025025ed8a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.137589 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq" (OuterVolumeSpecName: "kube-api-access-4wsvq") pod "1132fd26-9b0b-4a76-9e1c-ad025025ed8a" (UID: "1132fd26-9b0b-4a76-9e1c-ad025025ed8a"). InnerVolumeSpecName "kube-api-access-4wsvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.147914 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.167480 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1132fd26-9b0b-4a76-9e1c-ad025025ed8a" (UID: "1132fd26-9b0b-4a76-9e1c-ad025025ed8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.178451 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data" (OuterVolumeSpecName: "config-data") pod "1132fd26-9b0b-4a76-9e1c-ad025025ed8a" (UID: "1132fd26-9b0b-4a76-9e1c-ad025025ed8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.189805 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.189870 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.230306 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.230344 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsvq\" (UniqueName: \"kubernetes.io/projected/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-kube-api-access-4wsvq\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.230385 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.230402 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1132fd26-9b0b-4a76-9e1c-ad025025ed8a-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.232957 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.236930 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.329684 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:03:40 crc 
kubenswrapper[4808]: I0311 09:03:40.329919 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="dnsmasq-dns" containerID="cri-o://4c597aa8c547f3e11152f1df0b7195c03b366c3d24dc3cf3f131df2be45a7c43" gracePeriod=10 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.641434 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f44ls" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.641428 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f44ls" event={"ID":"1132fd26-9b0b-4a76-9e1c-ad025025ed8a","Type":"ContainerDied","Data":"d4c37321ced49c9b2c57a3ebf9154935693f9a6f8bc958e370ba56846526dd17"} Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.642124 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c37321ced49c9b2c57a3ebf9154935693f9a6f8bc958e370ba56846526dd17" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.652038 4808 generic.go:334] "Generic (PLEG): container finished" podID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerID="4c597aa8c547f3e11152f1df0b7195c03b366c3d24dc3cf3f131df2be45a7c43" exitCode=0 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.652098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" event={"ID":"6bf6365d-3a16-4b22-8d9a-0d261e818792","Type":"ContainerDied","Data":"4c597aa8c547f3e11152f1df0b7195c03b366c3d24dc3cf3f131df2be45a7c43"} Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.707341 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.739500 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.824655 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.824902 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-log" containerID="cri-o://ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5" gracePeriod=30 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.825160 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-api" containerID="cri-o://479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402" gracePeriod=30 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.831474 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.831614 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.844907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjb6g\" (UniqueName: \"kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.845042 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.845079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.845143 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.845174 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.845191 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb\") pod \"6bf6365d-3a16-4b22-8d9a-0d261e818792\" (UID: \"6bf6365d-3a16-4b22-8d9a-0d261e818792\") " Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.853577 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g" (OuterVolumeSpecName: "kube-api-access-qjb6g") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). 
InnerVolumeSpecName "kube-api-access-qjb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.902435 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.902627 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-log" containerID="cri-o://933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363" gracePeriod=30 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.903174 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-metadata" containerID="cri-o://3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" gracePeriod=30 Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.929748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config" (OuterVolumeSpecName: "config") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.930537 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.935267 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.948010 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.949065 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjb6g\" (UniqueName: \"kubernetes.io/projected/6bf6365d-3a16-4b22-8d9a-0d261e818792-kube-api-access-qjb6g\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.949106 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.949121 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.949133 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.949145 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:40 crc kubenswrapper[4808]: I0311 09:03:40.958022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bf6365d-3a16-4b22-8d9a-0d261e818792" (UID: "6bf6365d-3a16-4b22-8d9a-0d261e818792"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.050549 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf6365d-3a16-4b22-8d9a-0d261e818792-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: E0311 09:03:41.166288 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5642ba8b_850b_4677_9c9f_1c90de8c712b.slice/crio-3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.326468 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.486258 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.662846 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.662839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-vlcss" event={"ID":"6bf6365d-3a16-4b22-8d9a-0d261e818792","Type":"ContainerDied","Data":"e4c3094a831fbae89577ed216098a0d039a5839b3a6687b7e477f3ee6a940296"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.663020 4808 scope.go:117] "RemoveContainer" containerID="4c597aa8c547f3e11152f1df0b7195c03b366c3d24dc3cf3f131df2be45a7c43" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.667186 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data\") pod \"5642ba8b-850b-4677-9c9f-1c90de8c712b\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.667259 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs\") pod \"5642ba8b-850b-4677-9c9f-1c90de8c712b\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.667332 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs\") pod \"5642ba8b-850b-4677-9c9f-1c90de8c712b\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.667567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle\") pod \"5642ba8b-850b-4677-9c9f-1c90de8c712b\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " Mar 11 09:03:41 crc 
kubenswrapper[4808]: I0311 09:03:41.667637 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhcj2\" (UniqueName: \"kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2\") pod \"5642ba8b-850b-4677-9c9f-1c90de8c712b\" (UID: \"5642ba8b-850b-4677-9c9f-1c90de8c712b\") " Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.667800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs" (OuterVolumeSpecName: "logs") pod "5642ba8b-850b-4677-9c9f-1c90de8c712b" (UID: "5642ba8b-850b-4677-9c9f-1c90de8c712b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.668130 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5642ba8b-850b-4677-9c9f-1c90de8c712b-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.670913 4808 generic.go:334] "Generic (PLEG): container finished" podID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerID="3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" exitCode=0 Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.670958 4808 generic.go:334] "Generic (PLEG): container finished" podID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerID="933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363" exitCode=143 Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.671006 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerDied","Data":"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.671039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerDied","Data":"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.671053 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5642ba8b-850b-4677-9c9f-1c90de8c712b","Type":"ContainerDied","Data":"57d4511300033b9d1e9ec77de80d17a4a25283cf37b63315f830e243930997ad"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.671122 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.674534 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2" (OuterVolumeSpecName: "kube-api-access-dhcj2") pod "5642ba8b-850b-4677-9c9f-1c90de8c712b" (UID: "5642ba8b-850b-4677-9c9f-1c90de8c712b"). InnerVolumeSpecName "kube-api-access-dhcj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.674951 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerID="ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5" exitCode=143 Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.675000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerDied","Data":"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.685865 4808 generic.go:334] "Generic (PLEG): container finished" podID="cc20fcdb-674b-47bf-abcb-c7985d23f8c8" containerID="f5993ed9fccb696165a5956e8d830cce2b087e556012593cbecfbedebe131fea" exitCode=0 Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.685951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z55v" event={"ID":"cc20fcdb-674b-47bf-abcb-c7985d23f8c8","Type":"ContainerDied","Data":"f5993ed9fccb696165a5956e8d830cce2b087e556012593cbecfbedebe131fea"} Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.704378 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5642ba8b-850b-4677-9c9f-1c90de8c712b" (UID: "5642ba8b-850b-4677-9c9f-1c90de8c712b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.704786 4808 scope.go:117] "RemoveContainer" containerID="2f7920b834438b19cba2838ec5212d8513e9815595eaaa3e5978fa4f1e1cc7e2" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.709535 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.728335 4808 scope.go:117] "RemoveContainer" containerID="3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.739546 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-vlcss"] Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.742053 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data" (OuterVolumeSpecName: "config-data") pod "5642ba8b-850b-4677-9c9f-1c90de8c712b" (UID: "5642ba8b-850b-4677-9c9f-1c90de8c712b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.758993 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5642ba8b-850b-4677-9c9f-1c90de8c712b" (UID: "5642ba8b-850b-4677-9c9f-1c90de8c712b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.759540 4808 scope.go:117] "RemoveContainer" containerID="933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.778577 4808 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.778606 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.778615 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhcj2\" (UniqueName: \"kubernetes.io/projected/5642ba8b-850b-4677-9c9f-1c90de8c712b-kube-api-access-dhcj2\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.778623 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5642ba8b-850b-4677-9c9f-1c90de8c712b-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.791441 4808 scope.go:117] "RemoveContainer" containerID="3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" Mar 11 09:03:41 crc kubenswrapper[4808]: E0311 09:03:41.791936 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d\": container with ID starting with 3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d not found: ID does not exist" containerID="3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" Mar 11 09:03:41 crc 
kubenswrapper[4808]: I0311 09:03:41.791962 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d"} err="failed to get container status \"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d\": rpc error: code = NotFound desc = could not find container \"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d\": container with ID starting with 3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d not found: ID does not exist" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.791981 4808 scope.go:117] "RemoveContainer" containerID="933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363" Mar 11 09:03:41 crc kubenswrapper[4808]: E0311 09:03:41.800489 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363\": container with ID starting with 933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363 not found: ID does not exist" containerID="933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.800537 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363"} err="failed to get container status \"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363\": rpc error: code = NotFound desc = could not find container \"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363\": container with ID starting with 933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363 not found: ID does not exist" Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.800561 4808 scope.go:117] "RemoveContainer" containerID="3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d" Mar 11 
09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.800922 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d"} err="failed to get container status \"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d\": rpc error: code = NotFound desc = could not find container \"3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d\": container with ID starting with 3f90c9e2f85ef2adf1c055b7cf21f0cb4243cb135ea52e0f5bd45fe58a4fd07d not found: ID does not exist"
Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.800942 4808 scope.go:117] "RemoveContainer" containerID="933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363"
Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.801112 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363"} err="failed to get container status \"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363\": rpc error: code = NotFound desc = could not find container \"933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363\": container with ID starting with 933813d0add599011ebe46baf5fd2b763b411d4a61ebd855595409ae019db363 not found: ID does not exist"
Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.816043 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" path="/var/lib/kubelet/pods/6bf6365d-3a16-4b22-8d9a-0d261e818792/volumes"
Mar 11 09:03:41 crc kubenswrapper[4808]: I0311 09:03:41.994686 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.003014 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.024871 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:03:42 crc kubenswrapper[4808]: E0311 09:03:42.025540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-metadata"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025559 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-metadata"
Mar 11 09:03:42 crc kubenswrapper[4808]: E0311 09:03:42.025582 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="dnsmasq-dns"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025589 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="dnsmasq-dns"
Mar 11 09:03:42 crc kubenswrapper[4808]: E0311 09:03:42.025605 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1132fd26-9b0b-4a76-9e1c-ad025025ed8a" containerName="nova-manage"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025611 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1132fd26-9b0b-4a76-9e1c-ad025025ed8a" containerName="nova-manage"
Mar 11 09:03:42 crc kubenswrapper[4808]: E0311 09:03:42.025626 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="init"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025633 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="init"
Mar 11 09:03:42 crc kubenswrapper[4808]: E0311 09:03:42.025646 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-log"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025653 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-log"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025822 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf6365d-3a16-4b22-8d9a-0d261e818792" containerName="dnsmasq-dns"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025834 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-log"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025852 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" containerName="nova-metadata-metadata"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.025862 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1132fd26-9b0b-4a76-9e1c-ad025025ed8a" containerName="nova-manage"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.026796 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.030087 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.030347 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.035547 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.086931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.086972 4808 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqv9\" (UniqueName: \"kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.087044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.087065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.087097 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.189780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.189892 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.189913 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqv9\" (UniqueName: \"kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.189985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.190014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.190845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.193810 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.193824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.194226 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.207433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqv9\" (UniqueName: \"kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9\") pod \"nova-metadata-0\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.347248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.702115 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerName="nova-scheduler-scheduler" containerID="cri-o://941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b" gracePeriod=30
Mar 11 09:03:42 crc kubenswrapper[4808]: I0311 09:03:42.854951 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.093049 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z55v"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.207716 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjzwn\" (UniqueName: \"kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn\") pod \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") "
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.207797 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle\") pod \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") "
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.207912 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts\") pod \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") "
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.207964 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data\") pod \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\" (UID: \"cc20fcdb-674b-47bf-abcb-c7985d23f8c8\") "
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.212699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn" (OuterVolumeSpecName: "kube-api-access-mjzwn") pod "cc20fcdb-674b-47bf-abcb-c7985d23f8c8" (UID: "cc20fcdb-674b-47bf-abcb-c7985d23f8c8"). InnerVolumeSpecName "kube-api-access-mjzwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.212982 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts" (OuterVolumeSpecName: "scripts") pod "cc20fcdb-674b-47bf-abcb-c7985d23f8c8" (UID: "cc20fcdb-674b-47bf-abcb-c7985d23f8c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.245239 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data" (OuterVolumeSpecName: "config-data") pod "cc20fcdb-674b-47bf-abcb-c7985d23f8c8" (UID: "cc20fcdb-674b-47bf-abcb-c7985d23f8c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.262568 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc20fcdb-674b-47bf-abcb-c7985d23f8c8" (UID: "cc20fcdb-674b-47bf-abcb-c7985d23f8c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.310801 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.310843 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.310856 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.310871 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjzwn\" (UniqueName: \"kubernetes.io/projected/cc20fcdb-674b-47bf-abcb-c7985d23f8c8-kube-api-access-mjzwn\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.711617 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5z55v" event={"ID":"cc20fcdb-674b-47bf-abcb-c7985d23f8c8","Type":"ContainerDied","Data":"ddd6dafc7a93efd6d599e16c18894ab73186a1a64146a4cc5a4c641026b134af"}
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.711663 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd6dafc7a93efd6d599e16c18894ab73186a1a64146a4cc5a4c641026b134af"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.711665 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5z55v"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.714505 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerStarted","Data":"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77"}
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.714549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerStarted","Data":"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c"}
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.714562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerStarted","Data":"730404de54338bc6b30c36c559063025a13f808ae53512e9c736d2cd86f7afaf"}
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.737222 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7372022190000003 podStartE2EDuration="2.737202219s" podCreationTimestamp="2026-03-11 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:43.730285728 +0000 UTC m=+1474.683609038" watchObservedRunningTime="2026-03-11 09:03:43.737202219 +0000 UTC m=+1474.690525539"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.801075 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5642ba8b-850b-4677-9c9f-1c90de8c712b" path="/var/lib/kubelet/pods/5642ba8b-850b-4677-9c9f-1c90de8c712b/volumes"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.816607 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:03:43 crc kubenswrapper[4808]: E0311 09:03:43.817031 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc20fcdb-674b-47bf-abcb-c7985d23f8c8" containerName="nova-cell1-conductor-db-sync"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.817047 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc20fcdb-674b-47bf-abcb-c7985d23f8c8" containerName="nova-cell1-conductor-db-sync"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.817263 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc20fcdb-674b-47bf-abcb-c7985d23f8c8" containerName="nova-cell1-conductor-db-sync"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.817885 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.822950 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 11 09:03:43 crc kubenswrapper[4808]: I0311 09:03:43.835350 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.021115 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.021487 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.021622 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pzxs\" (UniqueName: \"kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.123144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pzxs\" (UniqueName: \"kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.123280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.123421 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.129957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.130165 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.147875 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pzxs\" (UniqueName: \"kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs\") pod \"nova-cell1-conductor-0\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.435765 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:44 crc kubenswrapper[4808]: I0311 09:03:44.769330 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:03:45 crc kubenswrapper[4808]: E0311 09:03:45.191577 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 11 09:03:45 crc kubenswrapper[4808]: E0311 09:03:45.193680 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 11 09:03:45 crc kubenswrapper[4808]: E0311 09:03:45.194993 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 11 09:03:45 crc kubenswrapper[4808]: E0311 09:03:45.195024 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerName="nova-scheduler-scheduler"
Mar 11 09:03:45 crc kubenswrapper[4808]: I0311 09:03:45.739617 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"512d8427-151d-42dd-a2fe-b52d22583604","Type":"ContainerStarted","Data":"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed"}
Mar 11 09:03:45 crc kubenswrapper[4808]: I0311 09:03:45.739658 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"512d8427-151d-42dd-a2fe-b52d22583604","Type":"ContainerStarted","Data":"c86b94d1a0d436dbe1724eab959d9c375ca1128d25cf62566cde2ea56ac15edd"}
Mar 11 09:03:45 crc kubenswrapper[4808]: I0311 09:03:45.739890 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 11 09:03:45 crc kubenswrapper[4808]: I0311 09:03:45.771146 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.771122545 podStartE2EDuration="2.771122545s" podCreationTimestamp="2026-03-11 09:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:45.754127701 +0000 UTC m=+1476.707451021" watchObservedRunningTime="2026-03-11 09:03:45.771122545 +0000 UTC m=+1476.724445895"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.599981 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.717220 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.749966 4808 generic.go:334] "Generic (PLEG): container finished" podID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b" exitCode=0
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.750090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8abee-130a-4290-8602-6b5b4ca8860e","Type":"ContainerDied","Data":"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"}
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.750127 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8abee-130a-4290-8602-6b5b4ca8860e","Type":"ContainerDied","Data":"47a018c2361547e7a5e5b6099e8a0e6add5f6c574bf3fec9bd8657253de2d29d"}
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.750148 4808 scope.go:117] "RemoveContainer" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.750290 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.752126 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerID="479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402" exitCode=0
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.752864 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.753013 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerDied","Data":"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"}
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.753033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a9283cf-cd37-47f9-97aa-f7964eec1c36","Type":"ContainerDied","Data":"56e9401b5adb177aed8af72629b388147f4e6bbb37cdacf57cfb069cd8812eb2"}
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.776669 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data\") pod \"2bb8abee-130a-4290-8602-6b5b4ca8860e\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.776714 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49lzl\" (UniqueName: \"kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl\") pod \"2bb8abee-130a-4290-8602-6b5b4ca8860e\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.776966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle\") pod \"2bb8abee-130a-4290-8602-6b5b4ca8860e\" (UID: \"2bb8abee-130a-4290-8602-6b5b4ca8860e\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.778425 4808 scope.go:117] "RemoveContainer" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"
Mar 11 09:03:46 crc kubenswrapper[4808]: E0311 09:03:46.780345 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b\": container with ID starting with 941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b not found: ID does not exist" containerID="941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.780421 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b"} err="failed to get container status \"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b\": rpc error: code = NotFound desc = could not find container \"941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b\": container with ID starting with 941e68628f1f20da7bdf161a252e40df1bcff6476a4d07663855b2484aa0d69b not found: ID does not exist"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.780454 4808 scope.go:117] "RemoveContainer" containerID="479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.783519 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl" (OuterVolumeSpecName: "kube-api-access-49lzl") pod "2bb8abee-130a-4290-8602-6b5b4ca8860e" (UID: "2bb8abee-130a-4290-8602-6b5b4ca8860e"). InnerVolumeSpecName "kube-api-access-49lzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.810639 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data" (OuterVolumeSpecName: "config-data") pod "2bb8abee-130a-4290-8602-6b5b4ca8860e" (UID: "2bb8abee-130a-4290-8602-6b5b4ca8860e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.811395 4808 scope.go:117] "RemoveContainer" containerID="ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.813846 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bb8abee-130a-4290-8602-6b5b4ca8860e" (UID: "2bb8abee-130a-4290-8602-6b5b4ca8860e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.833493 4808 scope.go:117] "RemoveContainer" containerID="479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"
Mar 11 09:03:46 crc kubenswrapper[4808]: E0311 09:03:46.833864 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402\": container with ID starting with 479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402 not found: ID does not exist" containerID="479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.833892 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402"} err="failed to get container status \"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402\": rpc error: code = NotFound desc = could not find container \"479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402\": container with ID starting with 479f294bc966add255c51b0d2f36a8591fac9edc220244d95ec0b58db9f07402 not found: ID does not exist"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.833913 4808 scope.go:117] 
"RemoveContainer" containerID="ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"
Mar 11 09:03:46 crc kubenswrapper[4808]: E0311 09:03:46.834224 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5\": container with ID starting with ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5 not found: ID does not exist" containerID="ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.834244 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5"} err="failed to get container status \"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5\": rpc error: code = NotFound desc = could not find container \"ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5\": container with ID starting with ae477f0cee0efbea3766afa802e1cb07123e925a1ff61ce04f4bf1047caf43d5 not found: ID does not exist"
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.877828 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs\") pod \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.877971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data\") pod \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878043 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlwmc\" (UniqueName: \"kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc\") pod \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle\") pod \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\" (UID: \"6a9283cf-cd37-47f9-97aa-f7964eec1c36\") "
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878432 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs" (OuterVolumeSpecName: "logs") pod "6a9283cf-cd37-47f9-97aa-f7964eec1c36" (UID: "6a9283cf-cd37-47f9-97aa-f7964eec1c36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878749 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878768 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8abee-130a-4290-8602-6b5b4ca8860e-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878777 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49lzl\" (UniqueName: \"kubernetes.io/projected/2bb8abee-130a-4290-8602-6b5b4ca8860e-kube-api-access-49lzl\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.878787 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9283cf-cd37-47f9-97aa-f7964eec1c36-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.881336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc" (OuterVolumeSpecName: "kube-api-access-mlwmc") pod "6a9283cf-cd37-47f9-97aa-f7964eec1c36" (UID: "6a9283cf-cd37-47f9-97aa-f7964eec1c36"). InnerVolumeSpecName "kube-api-access-mlwmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.904079 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data" (OuterVolumeSpecName: "config-data") pod "6a9283cf-cd37-47f9-97aa-f7964eec1c36" (UID: "6a9283cf-cd37-47f9-97aa-f7964eec1c36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.905220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a9283cf-cd37-47f9-97aa-f7964eec1c36" (UID: "6a9283cf-cd37-47f9-97aa-f7964eec1c36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.972865 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.980728 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.980762 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9283cf-cd37-47f9-97aa-f7964eec1c36-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:46 crc kubenswrapper[4808]: I0311 09:03:46.980776 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlwmc\" (UniqueName: \"kubernetes.io/projected/6a9283cf-cd37-47f9-97aa-f7964eec1c36-kube-api-access-mlwmc\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.091576 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.116250 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129010 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: E0311 09:03:47.129533 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerName="nova-scheduler-scheduler" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129557 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerName="nova-scheduler-scheduler" Mar 11 09:03:47 crc kubenswrapper[4808]: E0311 09:03:47.129573 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-api" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129583 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-api" Mar 11 09:03:47 crc kubenswrapper[4808]: E0311 09:03:47.129604 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-log" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129612 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-log" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129859 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-api" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129886 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" containerName="nova-api-log" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.129894 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" containerName="nova-scheduler-scheduler" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.130633 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.134790 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.149019 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.164587 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.183678 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.197402 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.199232 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.202674 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.204372 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.284757 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bh55\" (UniqueName: \"kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.284889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.284914 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.284980 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.285045 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.285109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8tkk\" (UniqueName: \"kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.285177 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.347584 
4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.347636 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.386940 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.386989 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387026 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8tkk\" (UniqueName: \"kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387147 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bh55\" (UniqueName: \"kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55\") pod \"nova-scheduler-0\" 
(UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387185 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.387602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.391714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.391721 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.398042 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.406683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.411970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8tkk\" (UniqueName: \"kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk\") pod \"nova-api-0\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.416052 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bh55\" (UniqueName: \"kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55\") pod \"nova-scheduler-0\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.486031 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.517442 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.803829 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb8abee-130a-4290-8602-6b5b4ca8860e" path="/var/lib/kubelet/pods/2bb8abee-130a-4290-8602-6b5b4ca8860e/volumes" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.804701 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9283cf-cd37-47f9-97aa-f7964eec1c36" path="/var/lib/kubelet/pods/6a9283cf-cd37-47f9-97aa-f7964eec1c36/volumes" Mar 11 09:03:47 crc kubenswrapper[4808]: I0311 09:03:47.946500 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.012349 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.784394 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerStarted","Data":"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b"} Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.785020 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerStarted","Data":"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2"} Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.785101 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerStarted","Data":"5be0fbd299924fe8b8d93b163a138fa95f87b0701d463aab2cc8be6cc46b0922"} Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.788371 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e1200f85-8f0d-4d77-b48e-943687a44df3","Type":"ContainerStarted","Data":"7de44323ba103c19769f99055732828f4800d9b6417b6189f355de999324b5c1"} Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.788436 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1200f85-8f0d-4d77-b48e-943687a44df3","Type":"ContainerStarted","Data":"4d5bb8b71675672d6056d7b035e58bd23996cf314a8db140b7f1e83e1de3a51e"} Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.806007 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.805988395 podStartE2EDuration="1.805988395s" podCreationTimestamp="2026-03-11 09:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:48.799499857 +0000 UTC m=+1479.752823177" watchObservedRunningTime="2026-03-11 09:03:48.805988395 +0000 UTC m=+1479.759311715" Mar 11 09:03:48 crc kubenswrapper[4808]: I0311 09:03:48.822207 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.822191676 podStartE2EDuration="1.822191676s" podCreationTimestamp="2026-03-11 09:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:48.816285184 +0000 UTC m=+1479.769608504" watchObservedRunningTime="2026-03-11 09:03:48.822191676 +0000 UTC m=+1479.775514996" Mar 11 09:03:49 crc kubenswrapper[4808]: I0311 09:03:49.471651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 11 09:03:50 crc kubenswrapper[4808]: I0311 09:03:50.893211 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:50 crc kubenswrapper[4808]: I0311 09:03:50.894632 4808 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" containerName="kube-state-metrics" containerID="cri-o://8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef" gracePeriod=30 Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.403060 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.565735 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvv7t\" (UniqueName: \"kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t\") pod \"fadbd323-a0cc-4d54-b71f-bebe9c716af4\" (UID: \"fadbd323-a0cc-4d54-b71f-bebe9c716af4\") " Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.584808 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t" (OuterVolumeSpecName: "kube-api-access-wvv7t") pod "fadbd323-a0cc-4d54-b71f-bebe9c716af4" (UID: "fadbd323-a0cc-4d54-b71f-bebe9c716af4"). InnerVolumeSpecName "kube-api-access-wvv7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.668312 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvv7t\" (UniqueName: \"kubernetes.io/projected/fadbd323-a0cc-4d54-b71f-bebe9c716af4-kube-api-access-wvv7t\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.813128 4808 generic.go:334] "Generic (PLEG): container finished" podID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" containerID="8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef" exitCode=2 Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.813172 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fadbd323-a0cc-4d54-b71f-bebe9c716af4","Type":"ContainerDied","Data":"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef"} Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.813202 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fadbd323-a0cc-4d54-b71f-bebe9c716af4","Type":"ContainerDied","Data":"62f67b3796de52818a1d42cf4f841b061087e5b5db2e871df2e32afdf2c23447"} Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.813203 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.813221 4808 scope.go:117] "RemoveContainer" containerID="8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.837617 4808 scope.go:117] "RemoveContainer" containerID="8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef" Mar 11 09:03:51 crc kubenswrapper[4808]: E0311 09:03:51.838090 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef\": container with ID starting with 8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef not found: ID does not exist" containerID="8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.838138 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef"} err="failed to get container status \"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef\": rpc error: code = NotFound desc = could not find container \"8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef\": container with ID starting with 8e3fe5ca2c9c5545def51d7a23ab4a606e863878ea492ecfbdf894f5166219ef not found: ID does not exist" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.841201 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.855554 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.864353 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:51 crc kubenswrapper[4808]: E0311 
09:03:51.865064 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" containerName="kube-state-metrics" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.865090 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" containerName="kube-state-metrics" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.865342 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" containerName="kube-state-metrics" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.866216 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.867970 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.868171 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.875002 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.979184 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.979644 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.979745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwmb5\" (UniqueName: \"kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:51 crc kubenswrapper[4808]: I0311 09:03:51.979785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.081889 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwmb5\" (UniqueName: \"kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.082295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.082544 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.082791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.086809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.086950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.094034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.100919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwmb5\" (UniqueName: \"kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5\") pod \"kube-state-metrics-0\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " 
pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.187939 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.348219 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.349681 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.486918 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.649671 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:03:52 crc kubenswrapper[4808]: I0311 09:03:52.825664 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7fde4956-a749-475e-9b5e-978fd33a4239","Type":"ContainerStarted","Data":"470f25486156346186d1d954378f4ada476774612b051acac91371c7d4123c57"} Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:52.992193 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:52.992520 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-central-agent" containerID="cri-o://b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c" gracePeriod=30 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:52.992643 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="proxy-httpd" 
containerID="cri-o://232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75" gracePeriod=30 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:52.992683 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="sg-core" containerID="cri-o://1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a" gracePeriod=30 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:52.992714 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-notification-agent" containerID="cri-o://d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77" gracePeriod=30 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.364791 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.379606 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.801375 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadbd323-a0cc-4d54-b71f-bebe9c716af4" path="/var/lib/kubelet/pods/fadbd323-a0cc-4d54-b71f-bebe9c716af4/volumes" Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.889878 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" 
containerID="232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75" exitCode=0 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.889925 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerID="1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a" exitCode=2 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.889934 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerID="b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c" exitCode=0 Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.890001 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerDied","Data":"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75"} Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.890033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerDied","Data":"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a"} Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.890046 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerDied","Data":"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c"} Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.903548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7fde4956-a749-475e-9b5e-978fd33a4239","Type":"ContainerStarted","Data":"93a0e55009e5032ca6043f65c6537e79b802bcb416d44c8d232d960a3cf12786"} Mar 11 09:03:53 crc kubenswrapper[4808]: I0311 09:03:53.904676 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 11 09:03:53 crc 
kubenswrapper[4808]: I0311 09:03:53.932031 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.553993459 podStartE2EDuration="2.932014137s" podCreationTimestamp="2026-03-11 09:03:51 +0000 UTC" firstStartedPulling="2026-03-11 09:03:52.644670379 +0000 UTC m=+1483.597993699" lastFinishedPulling="2026-03-11 09:03:53.022691057 +0000 UTC m=+1483.976014377" observedRunningTime="2026-03-11 09:03:53.930594196 +0000 UTC m=+1484.883917516" watchObservedRunningTime="2026-03-11 09:03:53.932014137 +0000 UTC m=+1484.885337457" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.611949 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.765741 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.765820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.765880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.765970 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxrw\" (UniqueName: \"kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766130 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766175 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts\") pod \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\" (UID: \"8f96efdc-7b47-4ce4-a534-84718c3dc7ce\") " Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766316 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766684 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.766706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.790178 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts" (OuterVolumeSpecName: "scripts") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.802543 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw" (OuterVolumeSpecName: "kube-api-access-pcxrw") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "kube-api-access-pcxrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.817553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.868737 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxrw\" (UniqueName: \"kubernetes.io/projected/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-kube-api-access-pcxrw\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.868774 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.868787 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.868799 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.874310 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.893961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data" (OuterVolumeSpecName: "config-data") pod "8f96efdc-7b47-4ce4-a534-84718c3dc7ce" (UID: "8f96efdc-7b47-4ce4-a534-84718c3dc7ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.928693 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerID="d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77" exitCode=0 Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.929173 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.929591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerDied","Data":"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77"} Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.929627 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f96efdc-7b47-4ce4-a534-84718c3dc7ce","Type":"ContainerDied","Data":"3d591601e45559ff655035fdbfc985b5b9765c93bc91377b334eda6415550ede"} Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.929646 4808 scope.go:117] "RemoveContainer" containerID="232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.971442 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.971479 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f96efdc-7b47-4ce4-a534-84718c3dc7ce-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.973329 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.976029 
4808 scope.go:117] "RemoveContainer" containerID="1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.984049 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992204 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:55 crc kubenswrapper[4808]: E0311 09:03:55.992598 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-central-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992613 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-central-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: E0311 09:03:55.992634 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-notification-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992640 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-notification-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: E0311 09:03:55.992656 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="sg-core" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992661 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="sg-core" Mar 11 09:03:55 crc kubenswrapper[4808]: E0311 09:03:55.992672 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="proxy-httpd" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992678 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" 
containerName="proxy-httpd" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992848 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-central-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992869 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="sg-core" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992880 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="ceilometer-notification-agent" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.992895 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" containerName="proxy-httpd" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.994550 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.996474 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:03:55 crc kubenswrapper[4808]: I0311 09:03:55.997264 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:55.999663 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.032024 4808 scope.go:117] "RemoveContainer" containerID="d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.034252 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.060540 4808 scope.go:117] "RemoveContainer" 
containerID="b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.072726 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.072771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlfp\" (UniqueName: \"kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.072800 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.072818 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.073495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 
09:03:56.073615 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.073705 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.073793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.083176 4808 scope.go:117] "RemoveContainer" containerID="232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75" Mar 11 09:03:56 crc kubenswrapper[4808]: E0311 09:03:56.083786 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75\": container with ID starting with 232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75 not found: ID does not exist" containerID="232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.083830 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75"} err="failed to get container status 
\"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75\": rpc error: code = NotFound desc = could not find container \"232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75\": container with ID starting with 232d3646be8e1aa9ac57c7bd941d3b740c401708edd1b8db98ce701df80ccb75 not found: ID does not exist" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.083855 4808 scope.go:117] "RemoveContainer" containerID="1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a" Mar 11 09:03:56 crc kubenswrapper[4808]: E0311 09:03:56.085141 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a\": container with ID starting with 1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a not found: ID does not exist" containerID="1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.085163 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a"} err="failed to get container status \"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a\": rpc error: code = NotFound desc = could not find container \"1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a\": container with ID starting with 1c22323585f07ca9032638dc564b2d985aa615534adde9047a09edbaca9ffc6a not found: ID does not exist" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.085180 4808 scope.go:117] "RemoveContainer" containerID="d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77" Mar 11 09:03:56 crc kubenswrapper[4808]: E0311 09:03:56.086765 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77\": container with ID starting with d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77 not found: ID does not exist" containerID="d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.086793 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77"} err="failed to get container status \"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77\": rpc error: code = NotFound desc = could not find container \"d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77\": container with ID starting with d5f51bb30049e0d4099dd423e5fa14c13976e2a1b7e4c564b512d2104f4a4a77 not found: ID does not exist" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.086806 4808 scope.go:117] "RemoveContainer" containerID="b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c" Mar 11 09:03:56 crc kubenswrapper[4808]: E0311 09:03:56.087119 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c\": container with ID starting with b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c not found: ID does not exist" containerID="b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.087135 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c"} err="failed to get container status \"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c\": rpc error: code = NotFound desc = could not find container \"b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c\": container with ID 
starting with b016ce50a19334679004b642d0b753f5c671a0ca0a1b7a5d01038b03e5de574c not found: ID does not exist" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175242 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxlfp\" (UniqueName: \"kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175455 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.175505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.176201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.176369 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.179278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " 
pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.179287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.179389 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.179625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.188584 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.191307 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxlfp\" (UniqueName: \"kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp\") pod \"ceilometer-0\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.333638 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:03:56 crc kubenswrapper[4808]: W0311 09:03:56.834117 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda6aa778_e5c4_4a42_8766_82f7f06fc986.slice/crio-2daa42e8b6215a6ef86357a4da82f296a080aead1a80fa983e7da19f88112588 WatchSource:0}: Error finding container 2daa42e8b6215a6ef86357a4da82f296a080aead1a80fa983e7da19f88112588: Status 404 returned error can't find the container with id 2daa42e8b6215a6ef86357a4da82f296a080aead1a80fa983e7da19f88112588 Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.836126 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:03:56 crc kubenswrapper[4808]: I0311 09:03:56.937204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerStarted","Data":"2daa42e8b6215a6ef86357a4da82f296a080aead1a80fa983e7da19f88112588"} Mar 11 09:03:57 crc kubenswrapper[4808]: I0311 09:03:57.487031 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 11 09:03:57 crc kubenswrapper[4808]: I0311 09:03:57.518135 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:03:57 crc kubenswrapper[4808]: I0311 09:03:57.518265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 11 09:03:57 crc kubenswrapper[4808]: I0311 09:03:57.518283 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:03:57 crc kubenswrapper[4808]: I0311 09:03:57.801121 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f96efdc-7b47-4ce4-a534-84718c3dc7ce" path="/var/lib/kubelet/pods/8f96efdc-7b47-4ce4-a534-84718c3dc7ce/volumes" Mar 11 09:03:58 crc 
kubenswrapper[4808]: I0311 09:03:58.000008 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 11 09:03:58 crc kubenswrapper[4808]: I0311 09:03:58.601661 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:03:58 crc kubenswrapper[4808]: I0311 09:03:58.601758 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:03:58 crc kubenswrapper[4808]: I0311 09:03:58.956848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerStarted","Data":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} Mar 11 09:03:58 crc kubenswrapper[4808]: I0311 09:03:58.957208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerStarted","Data":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} Mar 11 09:03:59 crc kubenswrapper[4808]: I0311 09:03:59.968111 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerStarted","Data":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.139468 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553664-5zwr4"] Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 
09:04:00.141172 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.145681 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.145989 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.146213 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.149865 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-5zwr4"] Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.284441 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkgl\" (UniqueName: \"kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl\") pod \"auto-csr-approver-29553664-5zwr4\" (UID: \"46ec9b97-accf-46c4-9ec4-a66f90979d05\") " pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.386603 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkgl\" (UniqueName: \"kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl\") pod \"auto-csr-approver-29553664-5zwr4\" (UID: \"46ec9b97-accf-46c4-9ec4-a66f90979d05\") " pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.423480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkgl\" (UniqueName: \"kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl\") pod 
\"auto-csr-approver-29553664-5zwr4\" (UID: \"46ec9b97-accf-46c4-9ec4-a66f90979d05\") " pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:00 crc kubenswrapper[4808]: I0311 09:04:00.502243 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:00 crc kubenswrapper[4808]: W0311 09:04:00.995379 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ec9b97_accf_46c4_9ec4_a66f90979d05.slice/crio-79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e WatchSource:0}: Error finding container 79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e: Status 404 returned error can't find the container with id 79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e Mar 11 09:04:01 crc kubenswrapper[4808]: I0311 09:04:01.011683 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-5zwr4"] Mar 11 09:04:01 crc kubenswrapper[4808]: I0311 09:04:01.991443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerStarted","Data":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} Mar 11 09:04:01 crc kubenswrapper[4808]: I0311 09:04:01.992057 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:04:01 crc kubenswrapper[4808]: I0311 09:04:01.995136 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" event={"ID":"46ec9b97-accf-46c4-9ec4-a66f90979d05","Type":"ContainerStarted","Data":"79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e"} Mar 11 09:04:02 crc kubenswrapper[4808]: I0311 09:04:02.021741 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.269669048 podStartE2EDuration="7.02172425s" podCreationTimestamp="2026-03-11 09:03:55 +0000 UTC" firstStartedPulling="2026-03-11 09:03:56.837551968 +0000 UTC m=+1487.790875318" lastFinishedPulling="2026-03-11 09:04:01.5896072 +0000 UTC m=+1492.542930520" observedRunningTime="2026-03-11 09:04:02.019286359 +0000 UTC m=+1492.972609679" watchObservedRunningTime="2026-03-11 09:04:02.02172425 +0000 UTC m=+1492.975047570" Mar 11 09:04:02 crc kubenswrapper[4808]: I0311 09:04:02.201964 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 11 09:04:02 crc kubenswrapper[4808]: I0311 09:04:02.358692 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:04:02 crc kubenswrapper[4808]: I0311 09:04:02.360393 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:04:02 crc kubenswrapper[4808]: I0311 09:04:02.363751 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:04:03 crc kubenswrapper[4808]: I0311 09:04:03.016863 4808 generic.go:334] "Generic (PLEG): container finished" podID="46ec9b97-accf-46c4-9ec4-a66f90979d05" containerID="0b673ef902153b042052c1c50aa5304582d6373cf0de80c64763348094d8a4a3" exitCode=0 Mar 11 09:04:03 crc kubenswrapper[4808]: I0311 09:04:03.017076 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" event={"ID":"46ec9b97-accf-46c4-9ec4-a66f90979d05","Type":"ContainerDied","Data":"0b673ef902153b042052c1c50aa5304582d6373cf0de80c64763348094d8a4a3"} Mar 11 09:04:03 crc kubenswrapper[4808]: I0311 09:04:03.026670 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:04:04 crc kubenswrapper[4808]: I0311 09:04:04.365469 4808 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:04 crc kubenswrapper[4808]: I0311 09:04:04.503160 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmkgl\" (UniqueName: \"kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl\") pod \"46ec9b97-accf-46c4-9ec4-a66f90979d05\" (UID: \"46ec9b97-accf-46c4-9ec4-a66f90979d05\") " Mar 11 09:04:04 crc kubenswrapper[4808]: I0311 09:04:04.509590 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl" (OuterVolumeSpecName: "kube-api-access-dmkgl") pod "46ec9b97-accf-46c4-9ec4-a66f90979d05" (UID: "46ec9b97-accf-46c4-9ec4-a66f90979d05"). InnerVolumeSpecName "kube-api-access-dmkgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:04 crc kubenswrapper[4808]: I0311 09:04:04.605246 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmkgl\" (UniqueName: \"kubernetes.io/projected/46ec9b97-accf-46c4-9ec4-a66f90979d05-kube-api-access-dmkgl\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.038295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" event={"ID":"46ec9b97-accf-46c4-9ec4-a66f90979d05","Type":"ContainerDied","Data":"79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e"} Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.038667 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79131ce93a49824cd81d3c8707be5f1db4c3a6d48f722b043cdc03ef7c56470e" Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.038342 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-5zwr4" Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.431926 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553658-6z4sp"] Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.438466 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553658-6z4sp"] Mar 11 09:04:05 crc kubenswrapper[4808]: I0311 09:04:05.821323 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e3d78d-fe06-452c-a667-90fb06438938" path="/var/lib/kubelet/pods/85e3d78d-fe06-452c-a667-90fb06438938/volumes" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.023813 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.055652 4808 generic.go:334] "Generic (PLEG): container finished" podID="576445f3-2e23-46f5-94a4-aac06720e4d4" containerID="9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e" exitCode=137 Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.055734 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.055718 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"576445f3-2e23-46f5-94a4-aac06720e4d4","Type":"ContainerDied","Data":"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e"} Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.056205 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"576445f3-2e23-46f5-94a4-aac06720e4d4","Type":"ContainerDied","Data":"bfa946d540b9860f72a7dd2c7e0224391aadb2384202d9902d3a241f31ba1be1"} Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.056273 4808 scope.go:117] "RemoveContainer" containerID="9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.100949 4808 scope.go:117] "RemoveContainer" containerID="9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e" Mar 11 09:04:06 crc kubenswrapper[4808]: E0311 09:04:06.101558 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e\": container with ID starting with 9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e not found: ID does not exist" containerID="9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.101621 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e"} err="failed to get container status \"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e\": rpc error: code = NotFound desc = could not find container \"9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e\": container with ID starting with 
9b1636a9ef8c8bdae221e3923e75a711cbae868836ed093d383f0a9bc64c267e not found: ID does not exist" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.130588 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle\") pod \"576445f3-2e23-46f5-94a4-aac06720e4d4\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.130714 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data\") pod \"576445f3-2e23-46f5-94a4-aac06720e4d4\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.130806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfbw7\" (UniqueName: \"kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7\") pod \"576445f3-2e23-46f5-94a4-aac06720e4d4\" (UID: \"576445f3-2e23-46f5-94a4-aac06720e4d4\") " Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.136226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7" (OuterVolumeSpecName: "kube-api-access-gfbw7") pod "576445f3-2e23-46f5-94a4-aac06720e4d4" (UID: "576445f3-2e23-46f5-94a4-aac06720e4d4"). InnerVolumeSpecName "kube-api-access-gfbw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.159243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "576445f3-2e23-46f5-94a4-aac06720e4d4" (UID: "576445f3-2e23-46f5-94a4-aac06720e4d4"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.182053 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data" (OuterVolumeSpecName: "config-data") pod "576445f3-2e23-46f5-94a4-aac06720e4d4" (UID: "576445f3-2e23-46f5-94a4-aac06720e4d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.233485 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.233523 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576445f3-2e23-46f5-94a4-aac06720e4d4-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.233537 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfbw7\" (UniqueName: \"kubernetes.io/projected/576445f3-2e23-46f5-94a4-aac06720e4d4-kube-api-access-gfbw7\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.390375 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.405404 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.420067 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:04:06 crc kubenswrapper[4808]: E0311 09:04:06.420992 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576445f3-2e23-46f5-94a4-aac06720e4d4" 
containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.421762 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="576445f3-2e23-46f5-94a4-aac06720e4d4" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:04:06 crc kubenswrapper[4808]: E0311 09:04:06.421948 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ec9b97-accf-46c4-9ec4-a66f90979d05" containerName="oc" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.422082 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ec9b97-accf-46c4-9ec4-a66f90979d05" containerName="oc" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.422570 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ec9b97-accf-46c4-9ec4-a66f90979d05" containerName="oc" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.422760 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="576445f3-2e23-46f5-94a4-aac06720e4d4" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.423913 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.427054 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.427218 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.427253 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.428960 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.436201 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.437100 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmnv\" (UniqueName: \"kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.437277 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 
crc kubenswrapper[4808]: I0311 09:04:06.437447 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.439156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.540558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmnv\" (UniqueName: \"kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.540641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.540679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc 
kubenswrapper[4808]: I0311 09:04:06.540704 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.540750 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.546119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.546150 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.547603 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.548244 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.561016 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmnv\" (UniqueName: \"kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:06 crc kubenswrapper[4808]: I0311 09:04:06.744946 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.344958 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:04:07 crc kubenswrapper[4808]: W0311 09:04:07.350555 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e648abe_27f1_49ac_aebb_a38e206fe101.slice/crio-cee64291f1f08ef382b743b230ebdae20a4aff24fbd3810672176112c8afae9e WatchSource:0}: Error finding container cee64291f1f08ef382b743b230ebdae20a4aff24fbd3810672176112c8afae9e: Status 404 returned error can't find the container with id cee64291f1f08ef382b743b230ebdae20a4aff24fbd3810672176112c8afae9e Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.522116 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.523479 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.529115 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" 
Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.531528 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:04:07 crc kubenswrapper[4808]: I0311 09:04:07.802918 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576445f3-2e23-46f5-94a4-aac06720e4d4" path="/var/lib/kubelet/pods/576445f3-2e23-46f5-94a4-aac06720e4d4/volumes" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.079325 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e648abe-27f1-49ac-aebb-a38e206fe101","Type":"ContainerStarted","Data":"169cbac68f068fd8106574eac7954c3f52641921608913e7e44b9ca108d7b78a"} Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.079413 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e648abe-27f1-49ac-aebb-a38e206fe101","Type":"ContainerStarted","Data":"cee64291f1f08ef382b743b230ebdae20a4aff24fbd3810672176112c8afae9e"} Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.080080 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.084637 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.106713 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.106694193 podStartE2EDuration="2.106694193s" podCreationTimestamp="2026-03-11 09:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:08.097014862 +0000 UTC m=+1499.050338192" watchObservedRunningTime="2026-03-11 09:04:08.106694193 +0000 UTC m=+1499.060017503" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.291580 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.298559 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.330095 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.478925 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwrt\" (UniqueName: \"kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.479023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.479068 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.479232 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: 
\"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.479333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.479438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwrt\" (UniqueName: \"kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581635 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581672 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: 
\"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.581749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.582588 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.582695 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " 
pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.582912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.583130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.583318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.602130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwrt\" (UniqueName: \"kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt\") pod \"dnsmasq-dns-7749c44969-t6dx4\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:08 crc kubenswrapper[4808]: I0311 09:04:08.623239 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:09 crc kubenswrapper[4808]: I0311 09:04:09.100009 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.097483 4808 generic.go:334] "Generic (PLEG): container finished" podID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerID="41aef0a023a3378b84cb70fe3872c109eb0c3c9ac4b7471c94ff982c80281442" exitCode=0 Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.099343 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" event={"ID":"760732fc-fc8a-4a24-beca-c969fb0260fe","Type":"ContainerDied","Data":"41aef0a023a3378b84cb70fe3872c109eb0c3c9ac4b7471c94ff982c80281442"} Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.099398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" event={"ID":"760732fc-fc8a-4a24-beca-c969fb0260fe","Type":"ContainerStarted","Data":"9237057f960e720d78d3340d0292fa97ddbc83277a383baf3d84e0fb50a4b103"} Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.285129 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.285795 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-central-agent" containerID="cri-o://838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" gracePeriod=30 Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.285847 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="sg-core" containerID="cri-o://4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" gracePeriod=30 Mar 11 09:04:10 crc 
kubenswrapper[4808]: I0311 09:04:10.285847 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="proxy-httpd" containerID="cri-o://e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" gracePeriod=30 Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.285847 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-notification-agent" containerID="cri-o://ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" gracePeriod=30 Mar 11 09:04:10 crc kubenswrapper[4808]: I0311 09:04:10.793772 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.017333 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.107339 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" event={"ID":"760732fc-fc8a-4a24-beca-c969fb0260fe","Type":"ContainerStarted","Data":"d02f043e013fd12091fec4a22839ba78f4684779ae4ff1b219c9f90367f903e7"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.108313 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111709 4808 generic.go:334] "Generic (PLEG): container finished" podID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" exitCode=0 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111742 4808 generic.go:334] "Generic (PLEG): container finished" podID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" 
exitCode=2 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111747 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerDied","Data":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111795 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerDied","Data":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111814 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerDied","Data":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111833 4808 scope.go:117] "RemoveContainer" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111751 4808 generic.go:334] "Generic (PLEG): container finished" podID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" exitCode=0 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111926 4808 generic.go:334] "Generic (PLEG): container finished" podID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" exitCode=0 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerDied","Data":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.111997 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da6aa778-e5c4-4a42-8766-82f7f06fc986","Type":"ContainerDied","Data":"2daa42e8b6215a6ef86357a4da82f296a080aead1a80fa983e7da19f88112588"} Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.112044 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-log" containerID="cri-o://19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2" gracePeriod=30 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.112144 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-api" containerID="cri-o://2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b" gracePeriod=30 Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.130046 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" podStartSLOduration=3.130023089 podStartE2EDuration="3.130023089s" podCreationTimestamp="2026-03-11 09:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:11.124955241 +0000 UTC m=+1502.078278601" watchObservedRunningTime="2026-03-11 09:04:11.130023089 +0000 UTC m=+1502.083346439" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.131880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: 
\"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.131935 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.131958 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.131986 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxlfp\" (UniqueName: \"kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132029 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132057 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132170 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132254 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd\") pod \"da6aa778-e5c4-4a42-8766-82f7f06fc986\" (UID: \"da6aa778-e5c4-4a42-8766-82f7f06fc986\") " Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132308 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.132691 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.133016 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.138108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts" (OuterVolumeSpecName: "scripts") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.146794 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp" (OuterVolumeSpecName: "kube-api-access-gxlfp") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "kube-api-access-gxlfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.192546 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.201113 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.227453 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: "da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.234578 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.234902 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.234962 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxlfp\" (UniqueName: \"kubernetes.io/projected/da6aa778-e5c4-4a42-8766-82f7f06fc986-kube-api-access-gxlfp\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.235038 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.235095 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.235228 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da6aa778-e5c4-4a42-8766-82f7f06fc986-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.263570 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data" (OuterVolumeSpecName: "config-data") pod "da6aa778-e5c4-4a42-8766-82f7f06fc986" (UID: 
"da6aa778-e5c4-4a42-8766-82f7f06fc986"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.336114 4808 scope.go:117] "RemoveContainer" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.337137 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa778-e5c4-4a42-8766-82f7f06fc986-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.356886 4808 scope.go:117] "RemoveContainer" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.380194 4808 scope.go:117] "RemoveContainer" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.400699 4808 scope.go:117] "RemoveContainer" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.401129 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": container with ID starting with e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae not found: ID does not exist" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.401171 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} err="failed to get container status \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": rpc error: code = NotFound desc = could not find container 
\"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": container with ID starting with e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.401196 4808 scope.go:117] "RemoveContainer" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.401821 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": container with ID starting with 4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f not found: ID does not exist" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.401851 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} err="failed to get container status \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": rpc error: code = NotFound desc = could not find container \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": container with ID starting with 4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.401872 4808 scope.go:117] "RemoveContainer" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.402112 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": container with ID starting with ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae not found: ID does not exist" 
containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402143 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} err="failed to get container status \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": rpc error: code = NotFound desc = could not find container \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": container with ID starting with ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402162 4808 scope.go:117] "RemoveContainer" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.402393 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": container with ID starting with 838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413 not found: ID does not exist" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402419 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} err="failed to get container status \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": rpc error: code = NotFound desc = could not find container \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": container with ID starting with 838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413 not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402437 4808 scope.go:117] 
"RemoveContainer" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402743 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} err="failed to get container status \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": rpc error: code = NotFound desc = could not find container \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": container with ID starting with e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.402780 4808 scope.go:117] "RemoveContainer" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403254 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} err="failed to get container status \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": rpc error: code = NotFound desc = could not find container \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": container with ID starting with 4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403288 4808 scope.go:117] "RemoveContainer" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403538 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} err="failed to get container status \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": rpc error: code = 
NotFound desc = could not find container \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": container with ID starting with ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403562 4808 scope.go:117] "RemoveContainer" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403835 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} err="failed to get container status \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": rpc error: code = NotFound desc = could not find container \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": container with ID starting with 838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413 not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.403883 4808 scope.go:117] "RemoveContainer" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404174 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} err="failed to get container status \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": rpc error: code = NotFound desc = could not find container \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": container with ID starting with e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404194 4808 scope.go:117] "RemoveContainer" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc 
kubenswrapper[4808]: I0311 09:04:11.404373 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} err="failed to get container status \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": rpc error: code = NotFound desc = could not find container \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": container with ID starting with 4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404399 4808 scope.go:117] "RemoveContainer" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404608 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} err="failed to get container status \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": rpc error: code = NotFound desc = could not find container \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": container with ID starting with ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404639 4808 scope.go:117] "RemoveContainer" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404842 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} err="failed to get container status \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": rpc error: code = NotFound desc = could not find container \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": container 
with ID starting with 838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413 not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.404868 4808 scope.go:117] "RemoveContainer" containerID="e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405053 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae"} err="failed to get container status \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": rpc error: code = NotFound desc = could not find container \"e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae\": container with ID starting with e7bca6946d586f439a930fe6949699c44b6ad58d809c153227b70beaee8714ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405080 4808 scope.go:117] "RemoveContainer" containerID="4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405246 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f"} err="failed to get container status \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": rpc error: code = NotFound desc = could not find container \"4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f\": container with ID starting with 4dea977ba0156ec034fbae0cf4b55bf6da39d39f06bfa0f39d4f44164264489f not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405271 4808 scope.go:117] "RemoveContainer" containerID="ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405811 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae"} err="failed to get container status \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": rpc error: code = NotFound desc = could not find container \"ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae\": container with ID starting with ad405a13c9ebb530062d4a354488ed62c11a53fa8be58e2ebbbedce12bcbf0ae not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.405834 4808 scope.go:117] "RemoveContainer" containerID="838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.406111 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413"} err="failed to get container status \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": rpc error: code = NotFound desc = could not find container \"838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413\": container with ID starting with 838875062adfcd5a5462f332a1d2b8ee8acde89a43b8946ec82f6d6bc93c4413 not found: ID does not exist" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.464862 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.476632 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.487739 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.488161 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="proxy-httpd" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488177 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="proxy-httpd" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.488203 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-central-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488211 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-central-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.488231 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-notification-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488237 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-notification-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: E0311 09:04:11.488245 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="sg-core" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488250 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="sg-core" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488482 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-notification-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488498 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="sg-core" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488512 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="proxy-httpd" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.488521 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" containerName="ceilometer-central-agent" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.490155 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.498728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.518711 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.518844 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.519130 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.645608 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.645663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.645873 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data\") pod \"ceilometer-0\" (UID: 
\"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.645947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.646260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.646342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.646414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.646434 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9rv7\" (UniqueName: \"kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.745240 
4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748079 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748106 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748177 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748214 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748259 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rv7\" (UniqueName: \"kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748342 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748755 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.748790 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.756601 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 
crc kubenswrapper[4808]: I0311 09:04:11.757003 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.763388 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.766747 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.766939 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9rv7\" (UniqueName: \"kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.781478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts\") pod \"ceilometer-0\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " pod="openstack/ceilometer-0" Mar 11 09:04:11 crc kubenswrapper[4808]: I0311 09:04:11.801951 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6aa778-e5c4-4a42-8766-82f7f06fc986" path="/var/lib/kubelet/pods/da6aa778-e5c4-4a42-8766-82f7f06fc986/volumes" Mar 11 09:04:11 
crc kubenswrapper[4808]: I0311 09:04:11.833842 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:12 crc kubenswrapper[4808]: I0311 09:04:12.122227 4808 generic.go:334] "Generic (PLEG): container finished" podID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerID="19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2" exitCode=143 Mar 11 09:04:12 crc kubenswrapper[4808]: I0311 09:04:12.122307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerDied","Data":"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2"} Mar 11 09:04:12 crc kubenswrapper[4808]: I0311 09:04:12.297031 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:12 crc kubenswrapper[4808]: I0311 09:04:12.313670 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:04:12 crc kubenswrapper[4808]: I0311 09:04:12.327259 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:13 crc kubenswrapper[4808]: I0311 09:04:13.136171 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerStarted","Data":"a94f30227615ba722307bb0a2c14505ded79a3c7d33997161cf57ebca5684d17"} Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.151937 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerStarted","Data":"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154"} Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.152161 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerStarted","Data":"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d"} Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.731074 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.814875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data\") pod \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.815104 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs\") pod \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.815153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8tkk\" (UniqueName: \"kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk\") pod \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.815195 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle\") pod \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\" (UID: \"1fd7c321-2b2e-4c84-8837-6f1dea682b5d\") " Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.816089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs" (OuterVolumeSpecName: "logs") pod 
"1fd7c321-2b2e-4c84-8837-6f1dea682b5d" (UID: "1fd7c321-2b2e-4c84-8837-6f1dea682b5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.820724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk" (OuterVolumeSpecName: "kube-api-access-r8tkk") pod "1fd7c321-2b2e-4c84-8837-6f1dea682b5d" (UID: "1fd7c321-2b2e-4c84-8837-6f1dea682b5d"). InnerVolumeSpecName "kube-api-access-r8tkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.864549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd7c321-2b2e-4c84-8837-6f1dea682b5d" (UID: "1fd7c321-2b2e-4c84-8837-6f1dea682b5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.877761 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data" (OuterVolumeSpecName: "config-data") pod "1fd7c321-2b2e-4c84-8837-6f1dea682b5d" (UID: "1fd7c321-2b2e-4c84-8837-6f1dea682b5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.917470 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.917508 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8tkk\" (UniqueName: \"kubernetes.io/projected/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-kube-api-access-r8tkk\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.917523 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:14 crc kubenswrapper[4808]: I0311 09:04:14.917534 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd7c321-2b2e-4c84-8837-6f1dea682b5d-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.162818 4808 generic.go:334] "Generic (PLEG): container finished" podID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerID="2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b" exitCode=0 Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.162878 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.162878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerDied","Data":"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b"} Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.163053 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd7c321-2b2e-4c84-8837-6f1dea682b5d","Type":"ContainerDied","Data":"5be0fbd299924fe8b8d93b163a138fa95f87b0701d463aab2cc8be6cc46b0922"} Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.163077 4808 scope.go:117] "RemoveContainer" containerID="2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.165594 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerStarted","Data":"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840"} Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.189215 4808 scope.go:117] "RemoveContainer" containerID="19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.195398 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.213096 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.213998 4808 scope.go:117] "RemoveContainer" containerID="2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b" Mar 11 09:04:15 crc kubenswrapper[4808]: E0311 09:04:15.214429 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b\": container with ID starting with 2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b not found: ID does not exist" containerID="2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.214460 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b"} err="failed to get container status \"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b\": rpc error: code = NotFound desc = could not find container \"2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b\": container with ID starting with 2bc2dfc4e2474da440b434255cf438e42a0f86a147e57db7c473cb659ec0377b not found: ID does not exist" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.214484 4808 scope.go:117] "RemoveContainer" containerID="19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2" Mar 11 09:04:15 crc kubenswrapper[4808]: E0311 09:04:15.214868 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2\": container with ID starting with 19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2 not found: ID does not exist" containerID="19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.214887 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2"} err="failed to get container status \"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2\": rpc error: code = NotFound desc = could not find container \"19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2\": container with ID 
starting with 19dd084139975303482c38cd8de71a48fbefe1d156e97bbda0aff8b0a330c3a2 not found: ID does not exist" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.226002 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:15 crc kubenswrapper[4808]: E0311 09:04:15.226432 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-log" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.226448 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-log" Mar 11 09:04:15 crc kubenswrapper[4808]: E0311 09:04:15.226465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-api" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.226471 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-api" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.226628 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-log" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.226643 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" containerName="nova-api-api" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.227596 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.229848 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.229985 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.230992 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.235289 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.325604 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.325949 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.325994 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.326038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzwcp\" (UniqueName: 
\"kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.326068 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.326090 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc 
kubenswrapper[4808]: I0311 09:04:15.427487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzwcp\" (UniqueName: \"kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.427842 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.432673 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.433601 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.437627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.439848 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.443062 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzwcp\" (UniqueName: \"kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp\") pod \"nova-api-0\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.546817 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:15 crc kubenswrapper[4808]: I0311 09:04:15.826898 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd7c321-2b2e-4c84-8837-6f1dea682b5d" path="/var/lib/kubelet/pods/1fd7c321-2b2e-4c84-8837-6f1dea682b5d/volumes" Mar 11 09:04:16 crc kubenswrapper[4808]: W0311 09:04:16.006018 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b4a6a1_6ce1_48b9_9cd3_711620b15a37.slice/crio-1529fb3d1966bac5d4db6899b79b59f7f62b89afa6ba967acbee4d6815c5c8bb WatchSource:0}: Error finding container 1529fb3d1966bac5d4db6899b79b59f7f62b89afa6ba967acbee4d6815c5c8bb: Status 404 returned error can't find the container with id 1529fb3d1966bac5d4db6899b79b59f7f62b89afa6ba967acbee4d6815c5c8bb Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.007323 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.027832 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.027901 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.178331 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerStarted","Data":"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0"} Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.178778 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerStarted","Data":"1529fb3d1966bac5d4db6899b79b59f7f62b89afa6ba967acbee4d6815c5c8bb"} Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.745820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:16 crc kubenswrapper[4808]: I0311 09:04:16.804843 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.194271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerStarted","Data":"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f"} Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.210218 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.218951 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.218929627 podStartE2EDuration="2.218929627s" podCreationTimestamp="2026-03-11 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:17.212430298 +0000 UTC m=+1508.165753618" watchObservedRunningTime="2026-03-11 09:04:17.218929627 +0000 UTC m=+1508.172252947" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.406094 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5fbh5"] Mar 11 
09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.407180 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.412420 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.412649 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.418967 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fbh5"] Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.471748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgxl\" (UniqueName: \"kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.471816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.471895 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.471927 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.573994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.574053 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.574117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgxl\" (UniqueName: \"kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.574164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.590315 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.590924 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.592029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.592863 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgxl\" (UniqueName: \"kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl\") pod \"nova-cell1-cell-mapping-5fbh5\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:17 crc kubenswrapper[4808]: I0311 09:04:17.738522 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.211741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerStarted","Data":"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917"} Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.212139 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-central-agent" containerID="cri-o://4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d" gracePeriod=30 Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.212164 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-notification-agent" containerID="cri-o://51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154" gracePeriod=30 Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.212157 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="proxy-httpd" containerID="cri-o://03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917" gracePeriod=30 Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.212145 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="sg-core" containerID="cri-o://8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840" gracePeriod=30 Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.245092 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.538060331 
podStartE2EDuration="7.245073503s" podCreationTimestamp="2026-03-11 09:04:11 +0000 UTC" firstStartedPulling="2026-03-11 09:04:12.313312542 +0000 UTC m=+1503.266635952" lastFinishedPulling="2026-03-11 09:04:17.020325804 +0000 UTC m=+1507.973649124" observedRunningTime="2026-03-11 09:04:18.237716649 +0000 UTC m=+1509.191039989" watchObservedRunningTime="2026-03-11 09:04:18.245073503 +0000 UTC m=+1509.198396823" Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.283187 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fbh5"] Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.624525 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.717342 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:04:18 crc kubenswrapper[4808]: I0311 09:04:18.717653 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="dnsmasq-dns" containerID="cri-o://f7de189655d9da6f7a1e8cc7c1967571cdb0ed5bcda3796b4f41b1596fabc0a5" gracePeriod=10 Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.237707 4808 generic.go:334] "Generic (PLEG): container finished" podID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerID="f7de189655d9da6f7a1e8cc7c1967571cdb0ed5bcda3796b4f41b1596fabc0a5" exitCode=0 Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.237978 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" event={"ID":"0d36885b-b25e-467a-bdd9-bb9100e7f02b","Type":"ContainerDied","Data":"f7de189655d9da6f7a1e8cc7c1967571cdb0ed5bcda3796b4f41b1596fabc0a5"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.238010 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" event={"ID":"0d36885b-b25e-467a-bdd9-bb9100e7f02b","Type":"ContainerDied","Data":"6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.238023 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c635d6331d450961ae69185f0e44f8ad3c1ef30449b34fe42118a7571a0787a" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.240136 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fbh5" event={"ID":"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb","Type":"ContainerStarted","Data":"07ff8f6d1805a2a54b4d0bba98badaaa8bb257fee469c5d8c592bb7825b277e9"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.240482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fbh5" event={"ID":"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb","Type":"ContainerStarted","Data":"e43a3debf093931017519068e4272409c47ce109bf278952fca0a746e86404d4"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.244965 4808 generic.go:334] "Generic (PLEG): container finished" podID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerID="03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917" exitCode=0 Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.245003 4808 generic.go:334] "Generic (PLEG): container finished" podID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerID="8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840" exitCode=2 Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.245015 4808 generic.go:334] "Generic (PLEG): container finished" podID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerID="51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154" exitCode=0 Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.245036 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerDied","Data":"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.245085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerDied","Data":"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.245099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerDied","Data":"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154"} Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.262535 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.265686 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5fbh5" podStartSLOduration=2.265667837 podStartE2EDuration="2.265667837s" podCreationTimestamp="2026-03-11 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:19.258441057 +0000 UTC m=+1510.211764377" watchObservedRunningTime="2026-03-11 09:04:19.265667837 +0000 UTC m=+1510.218991157" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.420635 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.420710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.420742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.420893 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.420952 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q94wr\" (UniqueName: \"kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.421026 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config\") pod \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\" (UID: \"0d36885b-b25e-467a-bdd9-bb9100e7f02b\") " Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.426155 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr" (OuterVolumeSpecName: "kube-api-access-q94wr") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). 
InnerVolumeSpecName "kube-api-access-q94wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.481062 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.489185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config" (OuterVolumeSpecName: "config") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.492930 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.496085 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.496588 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d36885b-b25e-467a-bdd9-bb9100e7f02b" (UID: "0d36885b-b25e-467a-bdd9-bb9100e7f02b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523788 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523821 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523831 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523840 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q94wr\" (UniqueName: \"kubernetes.io/projected/0d36885b-b25e-467a-bdd9-bb9100e7f02b-kube-api-access-q94wr\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523848 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.523856 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0d36885b-b25e-467a-bdd9-bb9100e7f02b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:19 crc kubenswrapper[4808]: I0311 09:04:19.912926 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.034055 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.034109 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.034178 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.035123 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9rv7\" (UniqueName: \"kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.035533 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" 
(UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.035940 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.036124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.036193 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.036209 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle\") pod \"7dfc1938-9bba-4759-bf06-92e67939aefa\" (UID: \"7dfc1938-9bba-4759-bf06-92e67939aefa\") " Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.036563 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.037178 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.037196 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dfc1938-9bba-4759-bf06-92e67939aefa-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.039621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts" (OuterVolumeSpecName: "scripts") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.039642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7" (OuterVolumeSpecName: "kube-api-access-z9rv7") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "kube-api-access-z9rv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.067576 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.087041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.107522 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.129320 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data" (OuterVolumeSpecName: "config-data") pod "7dfc1938-9bba-4759-bf06-92e67939aefa" (UID: "7dfc1938-9bba-4759-bf06-92e67939aefa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138371 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138401 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138411 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138423 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138432 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfc1938-9bba-4759-bf06-92e67939aefa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.138441 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9rv7\" (UniqueName: \"kubernetes.io/projected/7dfc1938-9bba-4759-bf06-92e67939aefa-kube-api-access-z9rv7\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.268954 4808 generic.go:334] "Generic (PLEG): container finished" podID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerID="4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d" exitCode=0 Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.269000 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerDied","Data":"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d"} Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.269075 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dfc1938-9bba-4759-bf06-92e67939aefa","Type":"ContainerDied","Data":"a94f30227615ba722307bb0a2c14505ded79a3c7d33997161cf57ebca5684d17"} Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.269081 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-lnxwb" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.269097 4808 scope.go:117] "RemoveContainer" containerID="03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.269079 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.303974 4808 scope.go:117] "RemoveContainer" containerID="8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.331266 4808 scope.go:117] "RemoveContainer" containerID="51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.333900 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.346481 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-lnxwb"] Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.363926 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.365621 4808 scope.go:117] "RemoveContainer" containerID="4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.374061 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.384186 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385763 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-notification-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.385803 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-notification-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385830 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="init" Mar 11 09:04:20 crc 
kubenswrapper[4808]: I0311 09:04:20.385842 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="init" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385871 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-central-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.385884 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-central-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385924 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="dnsmasq-dns" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.385936 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="dnsmasq-dns" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385967 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="proxy-httpd" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.385977 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="proxy-httpd" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.385998 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="sg-core" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386008 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="sg-core" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386312 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" containerName="dnsmasq-dns" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386352 
4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-central-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386411 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="sg-core" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386439 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="ceilometer-notification-agent" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.386463 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" containerName="proxy-httpd" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.392174 4808 scope.go:117] "RemoveContainer" containerID="03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.393104 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917\": container with ID starting with 03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917 not found: ID does not exist" containerID="03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393135 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917"} err="failed to get container status \"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917\": rpc error: code = NotFound desc = could not find container \"03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917\": container with ID starting with 03f84e31541ef49831e304d907e87784c691a7399a4cc3c828032d7d08a57917 not found: ID does not exist" 
Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393153 4808 scope.go:117] "RemoveContainer" containerID="8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393321 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.393460 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840\": container with ID starting with 8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840 not found: ID does not exist" containerID="8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393497 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393501 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840"} err="failed to get container status \"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840\": rpc error: code = NotFound desc = could not find container \"8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840\": container with ID starting with 8299c61c4dcf84b439cbaaafe97a9459a0402393b8ce1a63380405bb7b3d6840 not found: ID does not exist" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393526 4808 scope.go:117] "RemoveContainer" containerID="51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.393848 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154\": 
container with ID starting with 51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154 not found: ID does not exist" containerID="51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393874 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154"} err="failed to get container status \"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154\": rpc error: code = NotFound desc = could not find container \"51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154\": container with ID starting with 51f33c805bc04c418fb8027dc5fa74c99015b60cf8e37725fe46f0107328e154 not found: ID does not exist" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.393895 4808 scope.go:117] "RemoveContainer" containerID="4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d" Mar 11 09:04:20 crc kubenswrapper[4808]: E0311 09:04:20.394105 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d\": container with ID starting with 4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d not found: ID does not exist" containerID="4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.394131 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d"} err="failed to get container status \"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d\": rpc error: code = NotFound desc = could not find container \"4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d\": container with ID starting with 
4c44f9fa32bb63adf63a8aceb9e79b9cd805347bb05c92ec4d8b6feae6ce948d not found: ID does not exist" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.396058 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.396302 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.397866 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545635 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545692 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545715 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545808 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrvq\" (UniqueName: \"kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545835 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545860 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.545900 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648038 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc 
kubenswrapper[4808]: I0311 09:04:20.648302 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648464 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648582 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmrvq\" (UniqueName: \"kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.648813 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.649031 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.652120 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.653830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.654331 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.655683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.660474 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.671419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmrvq\" (UniqueName: \"kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq\") pod \"ceilometer-0\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " pod="openstack/ceilometer-0" Mar 11 09:04:20 crc kubenswrapper[4808]: I0311 09:04:20.713123 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:04:21 crc kubenswrapper[4808]: I0311 09:04:21.276618 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:04:21 crc kubenswrapper[4808]: W0311 09:04:21.282241 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc526a61c_3322_446a_8ff5_edd5a02f4b1f.slice/crio-09baa7e4d639a22c735a0165bcf92ae801d57d7fafd155d8b4b7f30a21984cd7 WatchSource:0}: Error finding container 09baa7e4d639a22c735a0165bcf92ae801d57d7fafd155d8b4b7f30a21984cd7: Status 404 returned error can't find the container with id 09baa7e4d639a22c735a0165bcf92ae801d57d7fafd155d8b4b7f30a21984cd7 Mar 11 09:04:21 crc kubenswrapper[4808]: I0311 09:04:21.802275 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d36885b-b25e-467a-bdd9-bb9100e7f02b" path="/var/lib/kubelet/pods/0d36885b-b25e-467a-bdd9-bb9100e7f02b/volumes" Mar 11 09:04:21 crc kubenswrapper[4808]: I0311 09:04:21.803201 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfc1938-9bba-4759-bf06-92e67939aefa" path="/var/lib/kubelet/pods/7dfc1938-9bba-4759-bf06-92e67939aefa/volumes" Mar 11 09:04:22 crc kubenswrapper[4808]: I0311 09:04:22.295546 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerStarted","Data":"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465"} Mar 11 09:04:22 crc kubenswrapper[4808]: I0311 09:04:22.295916 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerStarted","Data":"09baa7e4d639a22c735a0165bcf92ae801d57d7fafd155d8b4b7f30a21984cd7"} Mar 11 09:04:23 crc kubenswrapper[4808]: I0311 09:04:23.307397 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" containerID="07ff8f6d1805a2a54b4d0bba98badaaa8bb257fee469c5d8c592bb7825b277e9" exitCode=0 Mar 11 09:04:23 crc kubenswrapper[4808]: I0311 09:04:23.307472 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fbh5" event={"ID":"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb","Type":"ContainerDied","Data":"07ff8f6d1805a2a54b4d0bba98badaaa8bb257fee469c5d8c592bb7825b277e9"} Mar 11 09:04:23 crc kubenswrapper[4808]: I0311 09:04:23.309903 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerStarted","Data":"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda"} Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.358985 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerStarted","Data":"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27"} Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.750774 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.855260 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data\") pod \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.855600 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle\") pod \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.855815 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgxl\" (UniqueName: \"kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl\") pod \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.856028 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts\") pod \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\" (UID: \"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb\") " Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.861527 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl" (OuterVolumeSpecName: "kube-api-access-bpgxl") pod "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" (UID: "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb"). InnerVolumeSpecName "kube-api-access-bpgxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.875196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts" (OuterVolumeSpecName: "scripts") pod "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" (UID: "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.888128 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" (UID: "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.912985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data" (OuterVolumeSpecName: "config-data") pod "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" (UID: "09a9abaa-b83b-45d9-8fdd-cd9df7b814bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.959479 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgxl\" (UniqueName: \"kubernetes.io/projected/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-kube-api-access-bpgxl\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.959522 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.959536 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:24 crc kubenswrapper[4808]: I0311 09:04:24.959552 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.371862 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fbh5" event={"ID":"09a9abaa-b83b-45d9-8fdd-cd9df7b814bb","Type":"ContainerDied","Data":"e43a3debf093931017519068e4272409c47ce109bf278952fca0a746e86404d4"} Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.372120 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43a3debf093931017519068e4272409c47ce109bf278952fca0a746e86404d4" Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.372117 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fbh5" Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.377390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerStarted","Data":"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390"} Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.377583 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.403385 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.65689949 podStartE2EDuration="5.403218969s" podCreationTimestamp="2026-03-11 09:04:20 +0000 UTC" firstStartedPulling="2026-03-11 09:04:21.287716009 +0000 UTC m=+1512.241039349" lastFinishedPulling="2026-03-11 09:04:25.034035508 +0000 UTC m=+1515.987358828" observedRunningTime="2026-03-11 09:04:25.399272124 +0000 UTC m=+1516.352595444" watchObservedRunningTime="2026-03-11 09:04:25.403218969 +0000 UTC m=+1516.356542289" Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.516309 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.516625 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-log" containerID="cri-o://8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" gracePeriod=30 Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.516653 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-api" containerID="cri-o://c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" gracePeriod=30 Mar 11 09:04:25 crc 
kubenswrapper[4808]: I0311 09:04:25.530824 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.531058 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e1200f85-8f0d-4d77-b48e-943687a44df3" containerName="nova-scheduler-scheduler" containerID="cri-o://7de44323ba103c19769f99055732828f4800d9b6417b6189f355de999324b5c1" gracePeriod=30 Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.599036 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.599472 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" containerID="cri-o://00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77" gracePeriod=30 Mar 11 09:04:25 crc kubenswrapper[4808]: I0311 09:04:25.599979 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" containerID="cri-o://b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c" gracePeriod=30 Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.098308 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183199 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183267 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzwcp\" (UniqueName: \"kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183335 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183427 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.183505 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs\") pod \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\" (UID: \"89b4a6a1-6ce1-48b9-9cd3-711620b15a37\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.184107 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs" (OuterVolumeSpecName: "logs") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.184730 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.192574 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp" (OuterVolumeSpecName: "kube-api-access-vzwcp") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "kube-api-access-vzwcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.214382 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.251556 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data" (OuterVolumeSpecName: "config-data") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.253810 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.256554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "89b4a6a1-6ce1-48b9-9cd3-711620b15a37" (UID: "89b4a6a1-6ce1-48b9-9cd3-711620b15a37"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.287810 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.287857 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.287874 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.287889 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzwcp\" (UniqueName: \"kubernetes.io/projected/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-kube-api-access-vzwcp\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.287902 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b4a6a1-6ce1-48b9-9cd3-711620b15a37-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392336 4808 generic.go:334] "Generic (PLEG): container finished" podID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerID="c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" exitCode=0 Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392381 4808 generic.go:334] "Generic (PLEG): container finished" podID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerID="8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" exitCode=143 Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392391 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerDied","Data":"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f"} Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392417 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerDied","Data":"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0"} Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89b4a6a1-6ce1-48b9-9cd3-711620b15a37","Type":"ContainerDied","Data":"1529fb3d1966bac5d4db6899b79b59f7f62b89afa6ba967acbee4d6815c5c8bb"} Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.392472 4808 scope.go:117] "RemoveContainer" containerID="c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.405066 4808 generic.go:334] "Generic (PLEG): container finished" podID="e1200f85-8f0d-4d77-b48e-943687a44df3" containerID="7de44323ba103c19769f99055732828f4800d9b6417b6189f355de999324b5c1" exitCode=0 Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.405221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1200f85-8f0d-4d77-b48e-943687a44df3","Type":"ContainerDied","Data":"7de44323ba103c19769f99055732828f4800d9b6417b6189f355de999324b5c1"} Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.411036 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b660ca3-212f-4380-a566-d01166c0555d" containerID="b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c" exitCode=143 Mar 11 09:04:26 crc 
kubenswrapper[4808]: I0311 09:04:26.411784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerDied","Data":"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c"} Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.433613 4808 scope.go:117] "RemoveContainer" containerID="8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.443915 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.459311 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.469193 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:26 crc kubenswrapper[4808]: E0311 09:04:26.470827 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-log" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.470841 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-log" Mar 11 09:04:26 crc kubenswrapper[4808]: E0311 09:04:26.470853 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-api" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.470859 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-api" Mar 11 09:04:26 crc kubenswrapper[4808]: E0311 09:04:26.470876 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" containerName="nova-manage" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.470882 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" containerName="nova-manage" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.471041 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-api" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.471053 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" containerName="nova-api-log" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.471068 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" containerName="nova-manage" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.473693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.491981 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.492645 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.492737 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.509733 4808 scope.go:117] "RemoveContainer" containerID="c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" Mar 11 09:04:26 crc kubenswrapper[4808]: E0311 09:04:26.510220 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f\": container with ID starting with c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f not found: ID does not exist" containerID="c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" Mar 11 09:04:26 crc 
kubenswrapper[4808]: I0311 09:04:26.510247 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f"} err="failed to get container status \"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f\": rpc error: code = NotFound desc = could not find container \"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f\": container with ID starting with c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f not found: ID does not exist" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.510266 4808 scope.go:117] "RemoveContainer" containerID="8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" Mar 11 09:04:26 crc kubenswrapper[4808]: E0311 09:04:26.510594 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0\": container with ID starting with 8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0 not found: ID does not exist" containerID="8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.510631 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0"} err="failed to get container status \"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0\": rpc error: code = NotFound desc = could not find container \"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0\": container with ID starting with 8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0 not found: ID does not exist" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.510656 4808 scope.go:117] "RemoveContainer" containerID="c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f" Mar 11 
09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.510910 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f"} err="failed to get container status \"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f\": rpc error: code = NotFound desc = could not find container \"c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f\": container with ID starting with c35d426f724b36e7f18cfdfba915d0889bccb1640f5342244bf93ae811e3b00f not found: ID does not exist" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.510929 4808 scope.go:117] "RemoveContainer" containerID="8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.512884 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.513029 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0"} err="failed to get container status \"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0\": rpc error: code = NotFound desc = could not find container \"8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0\": container with ID starting with 8d80136e4ed25b18856c8c93ec17a7789337660fc25ca3cb9c7765eee57e86d0 not found: ID does not exist" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.595718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.595802 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.595846 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.595883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl5v\" (UniqueName: \"kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.595914 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.596060 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.609988 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.697536 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle\") pod \"e1200f85-8f0d-4d77-b48e-943687a44df3\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698011 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data\") pod \"e1200f85-8f0d-4d77-b48e-943687a44df3\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698111 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bh55\" (UniqueName: \"kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55\") pod \"e1200f85-8f0d-4d77-b48e-943687a44df3\" (UID: \"e1200f85-8f0d-4d77-b48e-943687a44df3\") " Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698381 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698481 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698523 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxl5v\" (UniqueName: \"kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698559 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.698594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.699384 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.704138 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.705286 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.706368 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.706748 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.709026 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55" (OuterVolumeSpecName: "kube-api-access-7bh55") pod "e1200f85-8f0d-4d77-b48e-943687a44df3" (UID: "e1200f85-8f0d-4d77-b48e-943687a44df3"). InnerVolumeSpecName "kube-api-access-7bh55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.719793 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxl5v\" (UniqueName: \"kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v\") pod \"nova-api-0\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " pod="openstack/nova-api-0" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.729557 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1200f85-8f0d-4d77-b48e-943687a44df3" (UID: "e1200f85-8f0d-4d77-b48e-943687a44df3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.737913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data" (OuterVolumeSpecName: "config-data") pod "e1200f85-8f0d-4d77-b48e-943687a44df3" (UID: "e1200f85-8f0d-4d77-b48e-943687a44df3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.800787 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.800819 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1200f85-8f0d-4d77-b48e-943687a44df3-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.800831 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bh55\" (UniqueName: \"kubernetes.io/projected/e1200f85-8f0d-4d77-b48e-943687a44df3-kube-api-access-7bh55\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:26 crc kubenswrapper[4808]: I0311 09:04:26.814529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.271166 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.428677 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1200f85-8f0d-4d77-b48e-943687a44df3","Type":"ContainerDied","Data":"4d5bb8b71675672d6056d7b035e58bd23996cf314a8db140b7f1e83e1de3a51e"} Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.429727 4808 scope.go:117] "RemoveContainer" containerID="7de44323ba103c19769f99055732828f4800d9b6417b6189f355de999324b5c1" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.428697 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.431304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerStarted","Data":"a14632b0fceda7f05ace4d9f216e351bc0c5327cf9ff3070be85c7588a48e269"} Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.481350 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.490228 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.497427 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:27 crc kubenswrapper[4808]: E0311 09:04:27.497819 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1200f85-8f0d-4d77-b48e-943687a44df3" containerName="nova-scheduler-scheduler" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.497834 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1200f85-8f0d-4d77-b48e-943687a44df3" containerName="nova-scheduler-scheduler" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.498002 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1200f85-8f0d-4d77-b48e-943687a44df3" containerName="nova-scheduler-scheduler" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.498585 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.501193 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.515944 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.629209 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.629704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.629852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4cxs\" (UniqueName: \"kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.732343 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4cxs\" (UniqueName: \"kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.732665 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.732731 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.737809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.738085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.754420 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4cxs\" (UniqueName: \"kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs\") pod \"nova-scheduler-0\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " pod="openstack/nova-scheduler-0" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.812065 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b4a6a1-6ce1-48b9-9cd3-711620b15a37" path="/var/lib/kubelet/pods/89b4a6a1-6ce1-48b9-9cd3-711620b15a37/volumes" Mar 11 
09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.813704 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1200f85-8f0d-4d77-b48e-943687a44df3" path="/var/lib/kubelet/pods/e1200f85-8f0d-4d77-b48e-943687a44df3/volumes" Mar 11 09:04:27 crc kubenswrapper[4808]: I0311 09:04:27.823232 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:04:28 crc kubenswrapper[4808]: W0311 09:04:28.279155 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode28fc76b_781e_4397_87b5_2b2bf6d2a496.slice/crio-53fde59faa4bc42e8ab2e74a1ab3aed153f2e7b3f7887b682e88f3b25e946250 WatchSource:0}: Error finding container 53fde59faa4bc42e8ab2e74a1ab3aed153f2e7b3f7887b682e88f3b25e946250: Status 404 returned error can't find the container with id 53fde59faa4bc42e8ab2e74a1ab3aed153f2e7b3f7887b682e88f3b25e946250 Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.291638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.448211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerStarted","Data":"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e"} Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.448247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerStarted","Data":"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7"} Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.450659 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e28fc76b-781e-4397-87b5-2b2bf6d2a496","Type":"ContainerStarted","Data":"53fde59faa4bc42e8ab2e74a1ab3aed153f2e7b3f7887b682e88f3b25e946250"} Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.478698 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.478672009 podStartE2EDuration="2.478672009s" podCreationTimestamp="2026-03-11 09:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:28.477699991 +0000 UTC m=+1519.431023321" watchObservedRunningTime="2026-03-11 09:04:28.478672009 +0000 UTC m=+1519.431995369" Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.731046 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:41496->10.217.0.197:8775: read: connection reset by peer" Mar 11 09:04:28 crc kubenswrapper[4808]: I0311 09:04:28.731046 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:41488->10.217.0.197:8775: read: connection reset by peer" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.186256 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.211180 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.211158839 podStartE2EDuration="2.211158839s" podCreationTimestamp="2026-03-11 09:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:28.496206939 +0000 UTC m=+1519.449530259" watchObservedRunningTime="2026-03-11 09:04:29.211158839 +0000 UTC m=+1520.164482159" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.261383 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxqv9\" (UniqueName: \"kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9\") pod \"0b660ca3-212f-4380-a566-d01166c0555d\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.261488 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data\") pod \"0b660ca3-212f-4380-a566-d01166c0555d\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.267179 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9" (OuterVolumeSpecName: "kube-api-access-zxqv9") pod "0b660ca3-212f-4380-a566-d01166c0555d" (UID: "0b660ca3-212f-4380-a566-d01166c0555d"). InnerVolumeSpecName "kube-api-access-zxqv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.261549 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle\") pod \"0b660ca3-212f-4380-a566-d01166c0555d\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.270252 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs\") pod \"0b660ca3-212f-4380-a566-d01166c0555d\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.270298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs\") pod \"0b660ca3-212f-4380-a566-d01166c0555d\" (UID: \"0b660ca3-212f-4380-a566-d01166c0555d\") " Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.271063 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxqv9\" (UniqueName: \"kubernetes.io/projected/0b660ca3-212f-4380-a566-d01166c0555d-kube-api-access-zxqv9\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.271444 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs" (OuterVolumeSpecName: "logs") pod "0b660ca3-212f-4380-a566-d01166c0555d" (UID: "0b660ca3-212f-4380-a566-d01166c0555d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.310976 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b660ca3-212f-4380-a566-d01166c0555d" (UID: "0b660ca3-212f-4380-a566-d01166c0555d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.313633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data" (OuterVolumeSpecName: "config-data") pod "0b660ca3-212f-4380-a566-d01166c0555d" (UID: "0b660ca3-212f-4380-a566-d01166c0555d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.344332 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b660ca3-212f-4380-a566-d01166c0555d" (UID: "0b660ca3-212f-4380-a566-d01166c0555d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.373167 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.373201 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.373212 4808 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b660ca3-212f-4380-a566-d01166c0555d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.373221 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b660ca3-212f-4380-a566-d01166c0555d-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.467752 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b660ca3-212f-4380-a566-d01166c0555d" containerID="00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77" exitCode=0 Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.467847 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.467871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerDied","Data":"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77"} Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.469063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b660ca3-212f-4380-a566-d01166c0555d","Type":"ContainerDied","Data":"730404de54338bc6b30c36c559063025a13f808ae53512e9c736d2cd86f7afaf"} Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.469087 4808 scope.go:117] "RemoveContainer" containerID="00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.480630 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e28fc76b-781e-4397-87b5-2b2bf6d2a496","Type":"ContainerStarted","Data":"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897"} Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.503548 4808 scope.go:117] "RemoveContainer" containerID="b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.542802 4808 scope.go:117] "RemoveContainer" containerID="00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77" Mar 11 09:04:29 crc kubenswrapper[4808]: E0311 09:04:29.549954 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77\": container with ID starting with 00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77 not found: ID does not exist" containerID="00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77" Mar 11 09:04:29 crc 
kubenswrapper[4808]: I0311 09:04:29.549994 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77"} err="failed to get container status \"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77\": rpc error: code = NotFound desc = could not find container \"00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77\": container with ID starting with 00787f1f446de5b650b5d65a276452c867882927ff6a676c3bab10c16475af77 not found: ID does not exist" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.550017 4808 scope.go:117] "RemoveContainer" containerID="b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c" Mar 11 09:04:29 crc kubenswrapper[4808]: E0311 09:04:29.554787 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c\": container with ID starting with b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c not found: ID does not exist" containerID="b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.554866 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c"} err="failed to get container status \"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c\": rpc error: code = NotFound desc = could not find container \"b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c\": container with ID starting with b9af2605df7a0ce309657c2c7b2da8b5837fc9d28bbdf9c5dde864cc4ed6570c not found: ID does not exist" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.563412 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:29 crc 
kubenswrapper[4808]: I0311 09:04:29.574070 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.641121 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:29 crc kubenswrapper[4808]: E0311 09:04:29.641607 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.641626 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" Mar 11 09:04:29 crc kubenswrapper[4808]: E0311 09:04:29.641654 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.641661 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.641838 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-metadata" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.641850 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b660ca3-212f-4380-a566-d01166c0555d" containerName="nova-metadata-log" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.642756 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.651133 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.651192 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.666750 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.781312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.781647 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dnq\" (UniqueName: \"kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.781761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.781867 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.781964 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.800916 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b660ca3-212f-4380-a566-d01166c0555d" path="/var/lib/kubelet/pods/0b660ca3-212f-4380-a566-d01166c0555d/volumes" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.883217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.883578 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dnq\" (UniqueName: \"kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.883701 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.883818 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.883933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.884149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.887254 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.888336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: I0311 09:04:29.888914 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:29 crc kubenswrapper[4808]: 
I0311 09:04:29.916398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dnq\" (UniqueName: \"kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq\") pod \"nova-metadata-0\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " pod="openstack/nova-metadata-0" Mar 11 09:04:30 crc kubenswrapper[4808]: I0311 09:04:30.026072 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:04:30 crc kubenswrapper[4808]: I0311 09:04:30.298732 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:04:30 crc kubenswrapper[4808]: W0311 09:04:30.312421 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25dc3abb_1552_49e8_a8b4_c51edd37f47c.slice/crio-71abd76ae9d31e0a942b2074d7291ab833de2fdf4e16011866140ae73a79b9b4 WatchSource:0}: Error finding container 71abd76ae9d31e0a942b2074d7291ab833de2fdf4e16011866140ae73a79b9b4: Status 404 returned error can't find the container with id 71abd76ae9d31e0a942b2074d7291ab833de2fdf4e16011866140ae73a79b9b4 Mar 11 09:04:30 crc kubenswrapper[4808]: I0311 09:04:30.498749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerStarted","Data":"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689"} Mar 11 09:04:30 crc kubenswrapper[4808]: I0311 09:04:30.498809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerStarted","Data":"71abd76ae9d31e0a942b2074d7291ab833de2fdf4e16011866140ae73a79b9b4"} Mar 11 09:04:31 crc kubenswrapper[4808]: I0311 09:04:31.520058 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerStarted","Data":"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314"} Mar 11 09:04:31 crc kubenswrapper[4808]: I0311 09:04:31.554875 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.55484914 podStartE2EDuration="2.55484914s" podCreationTimestamp="2026-03-11 09:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:04:31.542620384 +0000 UTC m=+1522.495943714" watchObservedRunningTime="2026-03-11 09:04:31.55484914 +0000 UTC m=+1522.508172480" Mar 11 09:04:32 crc kubenswrapper[4808]: I0311 09:04:32.823528 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:04:35 crc kubenswrapper[4808]: I0311 09:04:35.026712 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:04:35 crc kubenswrapper[4808]: I0311 09:04:35.027011 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:04:36 crc kubenswrapper[4808]: I0311 09:04:36.814715 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:04:36 crc kubenswrapper[4808]: I0311 09:04:36.815135 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:04:37 crc kubenswrapper[4808]: I0311 09:04:37.824206 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 11 09:04:37 crc kubenswrapper[4808]: I0311 09:04:37.828563 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:04:37 crc kubenswrapper[4808]: I0311 09:04:37.828562 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:04:37 crc kubenswrapper[4808]: I0311 09:04:37.875699 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 11 09:04:38 crc kubenswrapper[4808]: I0311 09:04:38.628515 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 11 09:04:40 crc kubenswrapper[4808]: I0311 09:04:40.026878 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:04:40 crc kubenswrapper[4808]: I0311 09:04:40.027222 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:04:41 crc kubenswrapper[4808]: I0311 09:04:41.047555 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:04:41 crc kubenswrapper[4808]: I0311 09:04:41.047624 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.027674 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.028280 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.830765 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.831229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.831576 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.831743 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.841088 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:04:46 crc kubenswrapper[4808]: I0311 09:04:46.843050 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.004046 4808 scope.go:117] "RemoveContainer" containerID="a1bb330fc24dc8b6f1e8e00764cafdfb6954ab68b7644260dd538d66d235432a" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.033227 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.035150 
4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.045527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.722155 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 11 09:04:50 crc kubenswrapper[4808]: I0311 09:04:50.734316 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.474224 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.475922 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="f9deef18-212d-4f90-adbe-84f8bb0177e1" containerName="openstackclient" containerID="cri-o://daa9d5be0dd494f080e478e2334f54c0d62a05a1d3e5c21ee9c65ba6d4767c26" gracePeriod=2 Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.485037 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.526567 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:09 crc kubenswrapper[4808]: E0311 09:05:09.526942 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9deef18-212d-4f90-adbe-84f8bb0177e1" containerName="openstackclient" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.526953 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9deef18-212d-4f90-adbe-84f8bb0177e1" containerName="openstackclient" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.527150 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9deef18-212d-4f90-adbe-84f8bb0177e1" 
containerName="openstackclient" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.528311 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.530683 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.552120 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.612669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.612761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chn2x\" (UniqueName: \"kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.625499 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t747d"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.652875 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t747d"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.674418 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.675687 4808 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.681576 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.704300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.722221 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.722528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhs6z\" (UniqueName: \"kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.722731 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.722860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chn2x\" (UniqueName: 
\"kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.725139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.731767 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spf22"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.766796 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.767388 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-zcd8b" podUID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" containerName="openstack-network-exporter" containerID="cri-o://6fd674f4d1eb009a734a5c6b505490fc1b34cc96907e81fed8a9032b16af9052" gracePeriod=30 Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.779066 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chn2x\" (UniqueName: \"kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x\") pod \"root-account-create-update-pcjsh\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.782568 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.783869 4808 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.789819 4808 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/swift-proxy-8749b8c99-fl7cg" secret="" err="secret \"swift-swift-dockercfg-r7rkp\" not found" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.790935 4808 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/swift-storage-0" secret="" err="secret \"swift-swift-dockercfg-r7rkp\" not found" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.801999 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.817774 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64c919f-53f9-4423-8ff5-76b34fa213ec" path="/var/lib/kubelet/pods/c64c919f-53f9-4423-8ff5-76b34fa213ec/volumes" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.818393 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.825549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhs6z\" (UniqueName: \"kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.825928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " 
pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: E0311 09:05:09.826093 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:09 crc kubenswrapper[4808]: E0311 09:05:09.826119 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:09 crc kubenswrapper[4808]: E0311 09:05:09.826133 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-8749b8c99-fl7cg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:09 crc kubenswrapper[4808]: E0311 09:05:09.826183 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift podName:c8cf2302-c420-4e0f-a292-a601a5f66bfa nodeName:}" failed. No retries permitted until 2026-03-11 09:05:10.32616109 +0000 UTC m=+1561.279484410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift") pod "swift-proxy-8749b8c99-fl7cg" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.855039 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.855326 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:09 crc kubenswrapper[4808]: I0311 09:05:09.862700 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c800-account-create-update-qnvq4"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:09.904311 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c800-account-create-update-qnvq4"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:09.915335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhs6z\" (UniqueName: \"kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z\") pod \"barbican-c800-account-create-update-cnrds\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:09.928805 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7p2\" (UniqueName: \"kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:09.929047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929412 4808 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929464 4808 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:10.429449592 +0000 UTC m=+1561.382772912 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929867 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929879 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929890 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.929914 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:05:10.429906935 +0000 UTC m=+1561.383230255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.930074 4808 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:09.930120 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:10.430105711 +0000 UTC m=+1561.383429031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-scripts" not found Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:09.945638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.002471 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b04e-account-create-update-v6p9b"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.004788 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.035099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7p2\" (UniqueName: \"kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.035292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.039141 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b04e-account-create-update-v6p9b"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.040593 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.062968 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.064329 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.071748 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.086415 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.102384 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.103642 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.112170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.113310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7p2\" (UniqueName: \"kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2\") pod \"glance-b04e-account-create-update-6vvff\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.134615 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.138067 4808 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-spf22" message=< Mar 11 09:05:10 crc kubenswrapper[4808]: Exiting ovn-controller (1) [ OK ] Mar 11 09:05:10 crc kubenswrapper[4808]: > 
Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.138110 4808 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-spf22" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" containerID="cri-o://36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.138149 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-spf22" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" containerID="cri-o://36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" gracePeriod=30 Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.176261 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e4df-account-create-update-dwl4j"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.194533 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e4df-account-create-update-dwl4j"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.226536 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.241351 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqg6h\" (UniqueName: \"kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.241421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvdr\" (UniqueName: 
\"kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.241458 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.241486 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.258605 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.259858 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.277203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.346359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.374716 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqg6h\" (UniqueName: \"kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.377320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvdr\" (UniqueName: \"kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.386389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc 
kubenswrapper[4808]: I0311 09:05:10.387680 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.388056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6g4b\" (UniqueName: \"kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.387576 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.390262 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.390306 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.409102 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data podName:a1e42e33-7453-4b97-abca-0c45cc27faa2 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:10.909066971 +0000 UTC m=+1561.862390291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data") pod "rabbitmq-server-0" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2") : configmap "rabbitmq-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.400192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqg6h\" (UniqueName: \"kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h\") pod \"nova-api-c59a-account-create-update-6xx54\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.394090 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.409547 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.409607 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-8749b8c99-fl7cg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.409697 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift podName:c8cf2302-c420-4e0f-a292-a601a5f66bfa nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.409685859 +0000 UTC m=+1562.363009179 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift") pod "swift-proxy-8749b8c99-fl7cg" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.426442 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.460127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvdr\" (UniqueName: \"kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr\") pod \"neutron-dde6-account-create-update-5lc8z\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.482583 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.483798 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.494576 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6g4b\" (UniqueName: \"kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.494656 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.494879 4808 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.494919 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.494906106 +0000 UTC m=+1562.448229426 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-scripts" not found Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.495880 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.495946 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.495957 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.495966 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.495987 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.495979648 +0000 UTC m=+1562.449302968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.496019 4808 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.496036 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.496030469 +0000 UTC m=+1562.449353789 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.506468 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.509153 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.532203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6g4b\" (UniqueName: \"kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b\") pod \"nova-cell1-3a44-account-create-update-bp6hq\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.571634 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.572130 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dde6-account-create-update-kdxb5"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.601207 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.601396 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x4xm\" (UniqueName: \"kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.611449 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dde6-account-create-update-kdxb5"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.612639 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.622479 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j6tdp"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.641214 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j6tdp"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.646522 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.656488 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c59a-account-create-update-xzlgx"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.684704 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c59a-account-create-update-xzlgx"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.684871 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.704766 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x4xm\" (UniqueName: \"kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.705056 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.706803 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.710998 4808 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.711834 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="ovn-northd" containerID="cri-o://5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840" gracePeriod=30 Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.712770 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="openstack-network-exporter" containerID="cri-o://6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644" gracePeriod=30 Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.737530 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rtcbw"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.749349 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x4xm\" (UniqueName: \"kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm\") pod \"nova-cell0-ac71-account-create-update-jdqhw\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.754803 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rtcbw"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.769450 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bvzx7"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.779781 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bvzx7"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.796445 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-2sbpn"] Mar 11 09:05:10 crc 
kubenswrapper[4808]: I0311 09:05:10.824116 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-2sbpn"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.843411 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.843732 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="dnsmasq-dns" containerID="cri-o://d02f043e013fd12091fec4a22839ba78f4684779ae4ff1b219c9f90367f903e7" gracePeriod=10 Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.860056 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rksh6"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.896640 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rksh6"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.905081 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-prpnm"] Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.914034 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: E0311 09:05:10.914552 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data podName:a1e42e33-7453-4b97-abca-0c45cc27faa2 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.914537633 +0000 UTC m=+1562.867860953 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data") pod "rabbitmq-server-0" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2") : configmap "rabbitmq-config-data" not found Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.920462 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-prpnm"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.931524 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-q4864"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.941404 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-q4864"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.965664 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:05:10 crc kubenswrapper[4808]: I0311 09:05:10.966254 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="openstack-network-exporter" containerID="cri-o://caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4" gracePeriod=300 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.005753 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zcd8b_2f244d77-0b6a-4bcf-a6b4-dc7028019e29/openstack-network-exporter/0.log" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.007216 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" containerID="6fd674f4d1eb009a734a5c6b505490fc1b34cc96907e81fed8a9032b16af9052" exitCode=2 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.007340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zcd8b" 
event={"ID":"2f244d77-0b6a-4bcf-a6b4-dc7028019e29","Type":"ContainerDied","Data":"6fd674f4d1eb009a734a5c6b505490fc1b34cc96907e81fed8a9032b16af9052"} Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.020529 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2t99h"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.025936 4808 generic.go:334] "Generic (PLEG): container finished" podID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerID="36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" exitCode=0 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.025974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22" event={"ID":"3fd1979f-d1de-42a8-be8e-b61087f737bc","Type":"ContainerDied","Data":"36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056"} Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.031737 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.034083 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2t99h"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.094018 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.094278 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-665556c5fd-bnc2f" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-log" containerID="cri-o://9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.094738 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-665556c5fd-bnc2f" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-api" 
containerID="cri-o://797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: W0311 09:05:11.138816 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b43352_7580_46b6_a90c_93a2598ac134.slice/crio-aabe2e2bd1867deaa13fefb19a67bbe19c0b7135d5cff6c4c262817d915a16cc WatchSource:0}: Error finding container aabe2e2bd1867deaa13fefb19a67bbe19c0b7135d5cff6c4c262817d915a16cc: Status 404 returned error can't find the container with id aabe2e2bd1867deaa13fefb19a67bbe19c0b7135d5cff6c4c262817d915a16cc Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.161708 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.162348 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="openstack-network-exporter" containerID="cri-o://0b1ad33990e113a73cab3cbdff3db9029771d7e96b54d5e45a675f0010b3a17c" gracePeriod=300 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.179479 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fbh5"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.191005 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="ovsdbserver-nb" containerID="cri-o://3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb" gracePeriod=300 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.209766 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fbh5"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.279050 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"barbican-db-secret" Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.287268 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:11 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: if [ -n "barbican" ]; then Mar 11 09:05:11 crc kubenswrapper[4808]: GRANT_DATABASE="barbican" Mar 11 09:05:11 crc kubenswrapper[4808]: else Mar 11 09:05:11 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:11 crc kubenswrapper[4808]: fi Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:11 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:11 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:11 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:11 crc kubenswrapper[4808]: # support updates Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.288652 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-c800-account-create-update-cnrds" podUID="85b43352-7580-46b6-a90c-93a2598ac134" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.332216 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.362699 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="ovsdbserver-sb" containerID="cri-o://464b1dff84d68cb54e2785c4998d80bcd9d2d9c96576bd186fecafee0ad6ee92" gracePeriod=300 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.378345 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.380289 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.385296 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:11 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source 
/var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: if [ -n "" ]; then Mar 11 09:05:11 crc kubenswrapper[4808]: GRANT_DATABASE="" Mar 11 09:05:11 crc kubenswrapper[4808]: else Mar 11 09:05:11 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:11 crc kubenswrapper[4808]: fi Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:11 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:11 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:11 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:11 crc kubenswrapper[4808]: # support updates Mar 11 09:05:11 crc kubenswrapper[4808]: Mar 11 09:05:11 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.387292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-pcjsh" podUID="b4922ac8-998e-4ba3-88cb-6805fa10c7fd" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.393064 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.416565 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-f44ls"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.427139 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-f44ls"] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.437919 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.437998 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data podName:549d4ad5-b5b0-45bd-87b0-b9a6ee77866e nodeName:}" failed. No retries permitted until 2026-03-11 09:05:11.937980178 +0000 UTC m=+1562.891303498 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data") pod "rabbitmq-cell1-server-0" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e") : configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.438947 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.438968 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.438977 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-8749b8c99-fl7cg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.439001 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift podName:c8cf2302-c420-4e0f-a292-a601a5f66bfa nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.438992227 +0000 UTC m=+1564.392315547 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift") pod "swift-proxy-8749b8c99-fl7cg" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.445712 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e766-account-create-update-9vw89"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.454751 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e766-account-create-update-9vw89"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.485446 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.485767 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api-log" containerID="cri-o://449aa92d48851baa151002599c1327e205ca3ac321e49b56f196db3dc8961bcc" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.486257 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api" containerID="cri-o://600ad91b4b18f17d71cd6096452f04faaca352ace36331fc5ba1a67011114ccf" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.522596 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.522832 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="cinder-scheduler" containerID="cri-o://aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c" gracePeriod=30 Mar 
11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.523069 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="probe" containerID="cri-o://fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.541736 4808 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.541779 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.541811 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.541825 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.541907 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.541883938 +0000 UTC m=+1564.495207298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.542869 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2cjt7"] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.543680 4808 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.543751 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.543715261 +0000 UTC m=+1564.497038581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.546467 4808 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.546520 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.546505522 +0000 UTC m=+1564.499828842 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-scripts" not found Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.574879 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2cjt7"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.665826 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.666092 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-log" containerID="cri-o://d3ed6d04157e55ecb439f27f04de8f8ce7e201a4c77868d475de1fccec4b926a" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.666503 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-httpd" containerID="cri-o://69ba1fe4eed62164ec20e29e3973c8d7238e07cc99a3de39bc2e21c2312afc26" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.699558 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.699828 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-log" containerID="cri-o://b19b84b99164bc73197e675a39d0a76695e78a944f51b46351fa56c764200830" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.700271 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-httpd" containerID="cri-o://cf55bdcbeb3d626ba9dfd3112f8a4875f325fed6c1ddb8829be7326c5b814762" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.710410 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711057 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-server" containerID="cri-o://82f8358e3c23ae5caf4686e4d2ed129be3db93f6c1646acae71a1541a038ba65" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711515 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="swift-recon-cron" containerID="cri-o://073e4d7fcffe762aa5c3e2750fba257255a258fb332465e09d8529f78025ea59" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711560 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="rsync" containerID="cri-o://8703e021e4664d65f7198a1e10e27fae65ecd623ec350adc5affd0e319e1f91c" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711592 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-expirer" containerID="cri-o://1c242216161e4c3f5f19cceec89f8a4f772fb8534970698b0f08070d25afb355" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711633 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-updater" 
containerID="cri-o://1fe8a90328e3c5fc5913211fdc47a8b9d9c43d633f87180fdcbdef9a02958f4e" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711664 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-auditor" containerID="cri-o://dd4a12d0f40b70bed0ff12cd9961f609f614dcb0c77bb2b41c37fb77c51b62c9" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711693 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-replicator" containerID="cri-o://36d57f8f584cb1d8ebdc130edda1da090b5344f1959c1b5fbee4de63ad660d1d" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711731 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-server" containerID="cri-o://5610b5414b923dbe5f29196fe9b69a93bc333d712a301053a8290a033d1900e2" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711770 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-updater" containerID="cri-o://91cb6e0a1f936b6dc40058ec5049a18b1acb5b6601c22fa29cce4e18b74747dd" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711825 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-auditor" containerID="cri-o://37d0039d1a631f590de0477dafeba545b23952735655b1396940b1af448dcf5c" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711882 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-replicator" containerID="cri-o://b34c64e3d4e3825344ce6d566596454bdaa22b31521f5119b23e2df58bc1f23d" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.711939 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-server" containerID="cri-o://ba474a4645c4e2ea029e0f9ff8bded4ccbfb98f7157a6bc5a8efcb5ca613c7de" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.712012 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-reaper" containerID="cri-o://081fbc280b7cac976f12e7408ccac953bdb3e17b1d8f4bac92f9023da0402d27" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.712090 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-auditor" containerID="cri-o://267b966e81c3ff2dcbbdf7e8e9d6fb5a08be9d360dec65483ea3395dbc33a811" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.712144 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-replicator" containerID="cri-o://681ee01ae1a57e4a64b46a088c7eb77d95b1fdf9586896e89410116abec29a90" gracePeriod=30 Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.766611 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sz9x6"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.857216 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a9abaa-b83b-45d9-8fdd-cd9df7b814bb" 
path="/var/lib/kubelet/pods/09a9abaa-b83b-45d9-8fdd-cd9df7b814bb/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.860934 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d430928-4434-4037-8d1c-d8cb7c8ff0f8" path="/var/lib/kubelet/pods/0d430928-4434-4037-8d1c-d8cb7c8ff0f8/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.861709 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1132fd26-9b0b-4a76-9e1c-ad025025ed8a" path="/var/lib/kubelet/pods/1132fd26-9b0b-4a76-9e1c-ad025025ed8a/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.863314 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df778c6-d975-42f0-853c-37b8840e76a5" path="/var/lib/kubelet/pods/3df778c6-d975-42f0-853c-37b8840e76a5/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.865313 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb827b1-9ec7-4234-a8e6-38072c48a09c" path="/var/lib/kubelet/pods/3eb827b1-9ec7-4234-a8e6-38072c48a09c/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.865992 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42045259-953d-4ea6-bda4-d24008a021b6" path="/var/lib/kubelet/pods/42045259-953d-4ea6-bda4-d24008a021b6/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.866576 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44caf86d-00aa-48a8-b56d-d4395487da92" path="/var/lib/kubelet/pods/44caf86d-00aa-48a8-b56d-d4395487da92/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.867132 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52065a8d-3ef3-4770-9f24-60d40800efcb" path="/var/lib/kubelet/pods/52065a8d-3ef3-4770-9f24-60d40800efcb/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.871668 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b98410-cbed-4562-b46c-0c34025045b6" 
path="/var/lib/kubelet/pods/60b98410-cbed-4562-b46c-0c34025045b6/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.874682 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5c43da-a137-4ac3-be09-ec76e9c204d1" path="/var/lib/kubelet/pods/6d5c43da-a137-4ac3-be09-ec76e9c204d1/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.876669 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712fc7d0-7b9d-4293-abfb-262e5482bfed" path="/var/lib/kubelet/pods/712fc7d0-7b9d-4293-abfb-262e5482bfed/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.877200 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b5595d-1d35-47d9-b6a2-196e30848a13" path="/var/lib/kubelet/pods/75b5595d-1d35-47d9-b6a2-196e30848a13/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.887390 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847a58e2-c27f-4b49-8300-cbe239822ffa" path="/var/lib/kubelet/pods/847a58e2-c27f-4b49-8300-cbe239822ffa/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.888463 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7b14d5-82e6-402d-a64d-cec1541d5195" path="/var/lib/kubelet/pods/9a7b14d5-82e6-402d-a64d-cec1541d5195/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.895611 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6da51ff-4f87-4c78-aff9-1b60b3a23633" path="/var/lib/kubelet/pods/e6da51ff-4f87-4c78-aff9-1b60b3a23633/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.898061 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb889123-a36e-4211-af3c-a0febc942f46" path="/var/lib/kubelet/pods/eb889123-a36e-4211-af3c-a0febc942f46/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.898657 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee99a1df-1d19-4463-a0ae-84a18e2f6d4e" 
path="/var/lib/kubelet/pods/ee99a1df-1d19-4463-a0ae-84a18e2f6d4e/volumes" Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.899188 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sz9x6"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.938525 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.952510 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p2tj7"] Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.965431 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d84g2"] Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.975211 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.975265 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data podName:549d4ad5-b5b0-45bd-87b0-b9a6ee77866e nodeName:}" failed. No retries permitted until 2026-03-11 09:05:12.975251504 +0000 UTC m=+1563.928574824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data") pod "rabbitmq-cell1-server-0" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e") : configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.975571 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: E0311 09:05:11.975594 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data podName:a1e42e33-7453-4b97-abca-0c45cc27faa2 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.975587194 +0000 UTC m=+1564.928910514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data") pod "rabbitmq-server-0" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2") : configmap "rabbitmq-config-data" not found Mar 11 09:05:11 crc kubenswrapper[4808]: I0311 09:05:11.992106 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d84g2"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.010135 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p2tj7"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.026878 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.036317 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.037454 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pcjsh" 
event={"ID":"b4922ac8-998e-4ba3-88cb-6805fa10c7fd","Type":"ContainerStarted","Data":"d3df68d4100af01b078ff305f0a9dce550768827d624e7f2306f7b769344af76"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.038494 4808 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-pcjsh" secret="" err="secret \"galera-openstack-cell1-dockercfg-kg7gr\" not found" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.048735 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-j6v9x"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.061890 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-j6v9x"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.062804 4808 generic.go:334] "Generic (PLEG): container finished" podID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerID="449aa92d48851baa151002599c1327e205ca3ac321e49b56f196db3dc8961bcc" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.062909 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerDied","Data":"449aa92d48851baa151002599c1327e205ca3ac321e49b56f196db3dc8961bcc"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.070956 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" containerID="cri-o://d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" gracePeriod=28 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.075981 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.076241 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7676f56769-zslbs" 
podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-api" containerID="cri-o://35ee99bbbf3c1d04399562decf8fafbf34dea50fb8418e32d3bedd96e8190659" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.076716 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7676f56769-zslbs" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-httpd" containerID="cri-o://313c3f34d3027bd9947d2e5694c49e600d145074cf0323486a527bfeeb269fbc" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.095435 4808 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 11 09:05:12 crc kubenswrapper[4808]: + source /usr/local/bin/container-scripts/functions Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNBridge=br-int Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNRemote=tcp:localhost:6642 Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNEncapType=geneve Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNAvailabilityZones= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ EnableChassisAsGateway=true Mar 11 09:05:12 crc kubenswrapper[4808]: ++ PhysicalNetworks= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNHostName= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 11 09:05:12 crc kubenswrapper[4808]: ++ ovs_dir=/var/lib/openvswitch Mar 11 09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 11 09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 11 09:05:12 crc kubenswrapper[4808]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + cleanup_ovsdb_server_semaphore Mar 11 09:05:12 crc kubenswrapper[4808]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 11 09:05:12 crc kubenswrapper[4808]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-mbbhf" message=< Mar 11 09:05:12 crc kubenswrapper[4808]: Exiting ovsdb-server (5) [ OK ] Mar 11 09:05:12 crc kubenswrapper[4808]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 11 09:05:12 crc kubenswrapper[4808]: + source /usr/local/bin/container-scripts/functions Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNBridge=br-int Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNRemote=tcp:localhost:6642 Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNEncapType=geneve Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNAvailabilityZones= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ EnableChassisAsGateway=true Mar 11 09:05:12 crc kubenswrapper[4808]: ++ PhysicalNetworks= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNHostName= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 11 09:05:12 crc kubenswrapper[4808]: ++ ovs_dir=/var/lib/openvswitch Mar 11 09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 11 
09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 11 09:05:12 crc kubenswrapper[4808]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + cleanup_ovsdb_server_semaphore Mar 11 09:05:12 crc kubenswrapper[4808]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 11 09:05:12 crc kubenswrapper[4808]: > Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.095485 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.095478 4808 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 11 09:05:12 crc kubenswrapper[4808]: + source /usr/local/bin/container-scripts/functions Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNBridge=br-int Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNRemote=tcp:localhost:6642 Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNEncapType=geneve Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNAvailabilityZones= Mar 11 09:05:12 crc 
kubenswrapper[4808]: ++ EnableChassisAsGateway=true Mar 11 09:05:12 crc kubenswrapper[4808]: ++ PhysicalNetworks= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ OVNHostName= Mar 11 09:05:12 crc kubenswrapper[4808]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 11 09:05:12 crc kubenswrapper[4808]: ++ ovs_dir=/var/lib/openvswitch Mar 11 09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 11 09:05:12 crc kubenswrapper[4808]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 11 09:05:12 crc kubenswrapper[4808]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + sleep 0.5 Mar 11 09:05:12 crc kubenswrapper[4808]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 11 09:05:12 crc kubenswrapper[4808]: + cleanup_ovsdb_server_semaphore Mar 11 09:05:12 crc kubenswrapper[4808]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 11 09:05:12 crc kubenswrapper[4808]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 11 09:05:12 crc kubenswrapper[4808]: > pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" containerID="cri-o://be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.095531 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" containerID="cri-o://be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" gracePeriod=28 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.096050 4808 generic.go:334] "Generic (PLEG): container finished" podID="45a36b4a-f974-46f6-a719-9765499308ed" containerID="b19b84b99164bc73197e675a39d0a76695e78a944f51b46351fa56c764200830" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.096124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerDied","Data":"b19b84b99164bc73197e675a39d0a76695e78a944f51b46351fa56c764200830"} Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.108937 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056 is running failed: container process not found" containerID="36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Mar 11 
09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.109678 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: if [ -n "" ]; then Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="" Mar 11 09:05:12 crc kubenswrapper[4808]: else Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:12 crc kubenswrapper[4808]: fi Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:12 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:12 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:12 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:12 crc kubenswrapper[4808]: # support updates Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.111489 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056 is running failed: container process not found" containerID="36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.111495 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-pcjsh" podUID="b4922ac8-998e-4ba3-88cb-6805fa10c7fd" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.130670 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056 is running failed: container process not found" containerID="36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.130725 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-spf22" 
podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.140412 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" event={"ID":"760732fc-fc8a-4a24-beca-c969fb0260fe","Type":"ContainerDied","Data":"d02f043e013fd12091fec4a22839ba78f4684779ae4ff1b219c9f90367f903e7"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.140357 4808 generic.go:334] "Generic (PLEG): container finished" podID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerID="d02f043e013fd12091fec4a22839ba78f4684779ae4ff1b219c9f90367f903e7" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.140505 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" event={"ID":"760732fc-fc8a-4a24-beca-c969fb0260fe","Type":"ContainerDied","Data":"9237057f960e720d78d3340d0292fa97ddbc83277a383baf3d84e0fb50a4b103"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.140526 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9237057f960e720d78d3340d0292fa97ddbc83277a383baf3d84e0fb50a4b103" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.141516 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spf22" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.151678 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.151851 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.155505 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.155615 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.158343 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] 
Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.158916 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.159024 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.159823 4808 generic.go:334] "Generic (PLEG): container finished" podID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerID="d3ed6d04157e55ecb439f27f04de8f8ce7e201a4c77868d475de1fccec4b926a" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.159908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerDied","Data":"d3ed6d04157e55ecb439f27f04de8f8ce7e201a4c77868d475de1fccec4b926a"} Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.178500 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.178556 4808 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.183466 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8tb5k"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.194050 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.194107 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.195006 4808 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.195083 4808 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.195171 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts podName:b4922ac8-998e-4ba3-88cb-6805fa10c7fd nodeName:}" failed. 
No retries permitted until 2026-03-11 09:05:12.695150166 +0000 UTC m=+1563.648473486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts") pod "root-account-create-update-pcjsh" (UID: "b4922ac8-998e-4ba3-88cb-6805fa10c7fd") : configmap "openstack-cell1-scripts" not found Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.197197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c800-account-create-update-cnrds" event={"ID":"85b43352-7580-46b6-a90c-93a2598ac134","Type":"ContainerStarted","Data":"aabe2e2bd1867deaa13fefb19a67bbe19c0b7135d5cff6c4c262817d915a16cc"} Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.210313 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: if [ -n "barbican" ]; then Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="barbican" Mar 11 09:05:12 crc kubenswrapper[4808]: else Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:12 crc kubenswrapper[4808]: fi Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: # going for maximum compatibility 
here: Mar 11 09:05:12 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:12 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:12 crc kubenswrapper[4808]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:12 crc kubenswrapper[4808]: # support updates Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.211448 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-c800-account-create-update-cnrds" podUID="85b43352-7580-46b6-a90c-93a2598ac134" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.211759 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zcd8b_2f244d77-0b6a-4bcf-a6b4-dc7028019e29/openstack-network-exporter/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.211822 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.212401 4808 generic.go:334] "Generic (PLEG): container finished" podID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerID="9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.212474 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerDied","Data":"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.223665 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8tb5k"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.232451 4808 generic.go:334] "Generic (PLEG): container finished" podID="f9deef18-212d-4f90-adbe-84f8bb0177e1" containerID="daa9d5be0dd494f080e478e2334f54c0d62a05a1d3e5c21ee9c65ba6d4767c26" exitCode=137 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.259266 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vqtl9"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.272499 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03f33ae9-1e48-4adf-94bc-69ede69802d0/ovsdbserver-nb/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.272568 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.278021 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279467 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="1c242216161e4c3f5f19cceec89f8a4f772fb8534970698b0f08070d25afb355" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279540 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="1fe8a90328e3c5fc5913211fdc47a8b9d9c43d633f87180fdcbdef9a02958f4e" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279554 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="dd4a12d0f40b70bed0ff12cd9961f609f614dcb0c77bb2b41c37fb77c51b62c9" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279561 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="91cb6e0a1f936b6dc40058ec5049a18b1acb5b6601c22fa29cce4e18b74747dd" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279568 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="37d0039d1a631f590de0477dafeba545b23952735655b1396940b1af448dcf5c" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279573 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="b34c64e3d4e3825344ce6d566596454bdaa22b31521f5119b23e2df58bc1f23d" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279580 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="081fbc280b7cac976f12e7408ccac953bdb3e17b1d8f4bac92f9023da0402d27" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279587 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="267b966e81c3ff2dcbbdf7e8e9d6fb5a08be9d360dec65483ea3395dbc33a811" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279594 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="681ee01ae1a57e4a64b46a088c7eb77d95b1fdf9586896e89410116abec29a90" exitCode=0 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279640 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"1c242216161e4c3f5f19cceec89f8a4f772fb8534970698b0f08070d25afb355"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"1fe8a90328e3c5fc5913211fdc47a8b9d9c43d633f87180fdcbdef9a02958f4e"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"dd4a12d0f40b70bed0ff12cd9961f609f614dcb0c77bb2b41c37fb77c51b62c9"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279683 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"91cb6e0a1f936b6dc40058ec5049a18b1acb5b6601c22fa29cce4e18b74747dd"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"37d0039d1a631f590de0477dafeba545b23952735655b1396940b1af448dcf5c"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279700 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"b34c64e3d4e3825344ce6d566596454bdaa22b31521f5119b23e2df58bc1f23d"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279709 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"081fbc280b7cac976f12e7408ccac953bdb3e17b1d8f4bac92f9023da0402d27"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279719 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"267b966e81c3ff2dcbbdf7e8e9d6fb5a08be9d360dec65483ea3395dbc33a811"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.279727 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"681ee01ae1a57e4a64b46a088c7eb77d95b1fdf9586896e89410116abec29a90"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.281222 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spf22" event={"ID":"3fd1979f-d1de-42a8-be8e-b61087f737bc","Type":"ContainerDied","Data":"ebdadbe8efd61c0f588c8c23d6c9c374038818d6f8894768278498a3945a465e"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.281249 4808 scope.go:117] "RemoveContainer" containerID="36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.281361 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spf22" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.287819 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.304820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.304959 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.304994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj9dh\" (UniqueName: \"kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.305025 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.305075 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.305152 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn\") pod \"3fd1979f-d1de-42a8-be8e-b61087f737bc\" (UID: \"3fd1979f-d1de-42a8-be8e-b61087f737bc\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.305977 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.323158 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zcd8b_2f244d77-0b6a-4bcf-a6b4-dc7028019e29/openstack-network-exporter/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.323265 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zcd8b" event={"ID":"2f244d77-0b6a-4bcf-a6b4-dc7028019e29","Type":"ContainerDied","Data":"a19043764ff278bbdc9aa76ca369bf97c17bacbe1c814fc53c6cdd85814e418b"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.323354 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zcd8b" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.326462 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vqtl9"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.327262 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run" (OuterVolumeSpecName: "var-run") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.332958 4808 generic.go:334] "Generic (PLEG): container finished" podID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerID="6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644" exitCode=2 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.333076 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerDied","Data":"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.334690 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts" (OuterVolumeSpecName: "scripts") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.357624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh" (OuterVolumeSpecName: "kube-api-access-sj9dh") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "kube-api-access-sj9dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_03f33ae9-1e48-4adf-94bc-69ede69802d0/ovsdbserver-nb/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361638 4808 generic.go:334] "Generic (PLEG): container finished" podID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerID="caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4" exitCode=2 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361657 4808 generic.go:334] "Generic (PLEG): container finished" podID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerID="3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03f33ae9-1e48-4adf-94bc-69ede69802d0","Type":"ContainerDied","Data":"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361879 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"03f33ae9-1e48-4adf-94bc-69ede69802d0","Type":"ContainerDied","Data":"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.361923 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.379789 4808 scope.go:117] "RemoveContainer" containerID="6fd674f4d1eb009a734a5c6b505490fc1b34cc96907e81fed8a9032b16af9052" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.381741 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.387198 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_073052f7-094c-467a-8910-b2ce25e5b981/ovsdbserver-sb/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.387263 4808 generic.go:334] "Generic (PLEG): container finished" podID="073052f7-094c-467a-8910-b2ce25e5b981" containerID="0b1ad33990e113a73cab3cbdff3db9029771d7e96b54d5e45a675f0010b3a17c" exitCode=2 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.387282 4808 generic.go:334] "Generic (PLEG): container finished" podID="073052f7-094c-467a-8910-b2ce25e5b981" containerID="464b1dff84d68cb54e2785c4998d80bcd9d2d9c96576bd186fecafee0ad6ee92" exitCode=143 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.387310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerDied","Data":"0b1ad33990e113a73cab3cbdff3db9029771d7e96b54d5e45a675f0010b3a17c"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.387335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerDied","Data":"464b1dff84d68cb54e2785c4998d80bcd9d2d9c96576bd186fecafee0ad6ee92"} Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.402867 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.415733 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="galera" containerID="cri-o://9dc74041b5de0f337b97cef5d7a082b76d5ccc5083fc2046cdd3bdf353512e2e" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417008 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8c7t\" (UniqueName: \"kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t\") pod \"f9deef18-212d-4f90-adbe-84f8bb0177e1\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417044 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret\") pod \"f9deef18-212d-4f90-adbe-84f8bb0177e1\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417067 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417083 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417132 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config\") pod \"f9deef18-212d-4f90-adbe-84f8bb0177e1\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417158 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417186 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417203 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417225 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417250 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417303 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417339 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417363 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417430 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: 
\"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417452 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwrt\" (UniqueName: \"kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle\") pod \"f9deef18-212d-4f90-adbe-84f8bb0177e1\" (UID: \"f9deef18-212d-4f90-adbe-84f8bb0177e1\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417523 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417543 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qltrn\" (UniqueName: \"kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417586 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc\") pod \"760732fc-fc8a-4a24-beca-c969fb0260fe\" (UID: \"760732fc-fc8a-4a24-beca-c969fb0260fe\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417605 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417651 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir\") pod \"03f33ae9-1e48-4adf-94bc-69ede69802d0\" (UID: \"03f33ae9-1e48-4adf-94bc-69ede69802d0\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.417678 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlb92\" (UniqueName: \"kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92\") pod \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\" (UID: \"2f244d77-0b6a-4bcf-a6b4-dc7028019e29\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418107 4808 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418127 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj9dh\" (UniqueName: \"kubernetes.io/projected/3fd1979f-d1de-42a8-be8e-b61087f737bc-kube-api-access-sj9dh\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418137 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418146 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fd1979f-d1de-42a8-be8e-b61087f737bc-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418153 4808 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fd1979f-d1de-42a8-be8e-b61087f737bc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.418568 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.438423 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config" (OuterVolumeSpecName: "config") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.439587 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.444275 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g579h"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.445203 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.454008 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts" (OuterVolumeSpecName: "scripts") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.454119 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.460064 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config" (OuterVolumeSpecName: "config") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.465908 4808 scope.go:117] "RemoveContainer" containerID="caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.466662 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92" (OuterVolumeSpecName: "kube-api-access-dlb92") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "kube-api-access-dlb92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.470426 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_073052f7-094c-467a-8910-b2ce25e5b981/ovsdbserver-sb/0.log" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.470494 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.477368 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn" (OuterVolumeSpecName: "kube-api-access-qltrn") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "kube-api-access-qltrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.477548 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t" (OuterVolumeSpecName: "kube-api-access-p8c7t") pod "f9deef18-212d-4f90-adbe-84f8bb0177e1" (UID: "f9deef18-212d-4f90-adbe-84f8bb0177e1"). InnerVolumeSpecName "kube-api-access-p8c7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.493198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt" (OuterVolumeSpecName: "kube-api-access-brwrt") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). InnerVolumeSpecName "kube-api-access-brwrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.505980 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-g579h"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.517109 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9deef18-212d-4f90-adbe-84f8bb0177e1" (UID: "f9deef18-212d-4f90-adbe-84f8bb0177e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.519995 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520042 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520108 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520143 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520166 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520361 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lhl\" 
(UniqueName: \"kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.520552 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config\") pod \"073052f7-094c-467a-8910-b2ce25e5b981\" (UID: \"073052f7-094c-467a-8910-b2ce25e5b981\") " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521042 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521058 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521067 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f33ae9-1e48-4adf-94bc-69ede69802d0-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521076 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521084 4808 reconciler_common.go:293] "Volume 
detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovs-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521093 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwrt\" (UniqueName: \"kubernetes.io/projected/760732fc-fc8a-4a24-beca-c969fb0260fe-kube-api-access-brwrt\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521102 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521110 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521118 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qltrn\" (UniqueName: \"kubernetes.io/projected/03f33ae9-1e48-4adf-94bc-69ede69802d0-kube-api-access-qltrn\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521126 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521135 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlb92\" (UniqueName: \"kubernetes.io/projected/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-kube-api-access-dlb92\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521143 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8c7t\" (UniqueName: 
\"kubernetes.io/projected/f9deef18-212d-4f90-adbe-84f8bb0177e1-kube-api-access-p8c7t\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.521408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f9deef18-212d-4f90-adbe-84f8bb0177e1" (UID: "f9deef18-212d-4f90-adbe-84f8bb0177e1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.523703 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config" (OuterVolumeSpecName: "config") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.524629 4808 scope.go:117] "RemoveContainer" containerID="3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.532135 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.532997 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.533261 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" containerID="cri-o://a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.533415 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" containerID="cri-o://48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.535072 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts" (OuterVolumeSpecName: "scripts") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.535505 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl" (OuterVolumeSpecName: "kube-api-access-n7lhl") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "kube-api-access-n7lhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.537917 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.541882 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.551096 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.551443 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-log" containerID="cri-o://c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.551540 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-api" containerID="cri-o://dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.558485 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.558771 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener-log" 
containerID="cri-o://b632c1c0284e2ec0e59641f479394eaf69354fe98cfe9ecbe781f8db04299ccf" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.559301 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener" containerID="cri-o://aa0c1e84c07ee55b3422a4d17c6d167a2c103b98d130bfcf9441a6a8c53b6cb2" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.579696 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.580020 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-688446ffb8-4n8n7" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api-log" containerID="cri-o://e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.580153 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-688446ffb8-4n8n7" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api" containerID="cri-o://199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.590921 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.591161 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6b745fd47c-v25f8" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker-log" containerID="cri-o://7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.591746 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6b745fd47c-v25f8" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker" containerID="cri-o://e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.599937 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.602877 4808 scope.go:117] "RemoveContainer" containerID="caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.603925 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4\": container with ID starting with caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4 not found: ID does not exist" containerID="caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.603956 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4"} err="failed to get container status \"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4\": rpc error: code = NotFound desc = could not find container \"caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4\": container with ID starting with caf403e5666b9d2c99d9a9fbdab7975e088bc83ebcc56dde5fee8c7b6be528e4 not found: ID does not exist" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.603976 4808 scope.go:117] "RemoveContainer" containerID="3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.612722 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.614975 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.616978 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3e648abe-27f1-49ac-aebb-a38e206fe101" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://169cbac68f068fd8106574eac7954c3f52641921608913e7e44b9ca108d7b78a" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622438 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622466 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622477 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622489 4808 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622499 4808 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7lhl\" (UniqueName: \"kubernetes.io/projected/073052f7-094c-467a-8910-b2ce25e5b981-kube-api-access-n7lhl\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622508 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/073052f7-094c-467a-8910-b2ce25e5b981-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.622517 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073052f7-094c-467a-8910-b2ce25e5b981-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.624913 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb\": container with ID starting with 3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb not found: ID does not exist" containerID="3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.624953 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb"} err="failed to get container status \"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb\": rpc error: code = NotFound desc = could not find container \"3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb\": container with ID starting with 3904a275fa7eb7164f6dcc6972082021fbcb12b9a1d3cf667a02688b376fc0eb not found: ID does not exist" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.625619 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:12 crc kubenswrapper[4808]: 
I0311 09:05:12.639829 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.669300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.677679 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.703649 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.703893 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" containerID="cri-o://46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.711787 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.712255 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" 
containerID="cri-o://e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.715239 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z55v"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.723453 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5z55v"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.727751 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dmgm7"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.733740 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.733954 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" gracePeriod=30 Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.734350 4808 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.734426 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts podName:b4922ac8-998e-4ba3-88cb-6805fa10c7fd nodeName:}" failed. No retries permitted until 2026-03-11 09:05:13.734405029 +0000 UTC m=+1564.687728349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts") pod "root-account-create-update-pcjsh" (UID: "b4922ac8-998e-4ba3-88cb-6805fa10c7fd") : configmap "openstack-cell1-scripts" not found Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.734763 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.734781 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.739856 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dmgm7"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.759224 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="rabbitmq" containerID="cri-o://0f8754c1594d2feb21d234a73d819b71534a5e40f918961ac2feb938e1330d6c" gracePeriod=604800 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.762590 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.766310 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config" (OuterVolumeSpecName: "config") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.781959 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.782903 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="rabbitmq" containerID="cri-o://05f865332615ad9f698e0cf3c33551f4d94238b92da639d722149c1d2ab22b35" gracePeriod=604800 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.786797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.787337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.811098 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.837577 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838692 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838716 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838725 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838733 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838743 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.838753 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc 
kubenswrapper[4808]: I0311 09:05:12.863765 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.865538 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.868755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.872346 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.872934 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.872997 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" Mar 11 09:05:12 crc 
kubenswrapper[4808]: I0311 09:05:12.876850 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f9deef18-212d-4f90-adbe-84f8bb0177e1" (UID: "f9deef18-212d-4f90-adbe-84f8bb0177e1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: W0311 09:05:12.887449 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd56dace1_91d0_4b10_a4f1_06e5917d1675.slice/crio-bc6bbee7778e88c95e65ad7e7c540a3cb4fef17f3d184a5bb2412c26b3f05199 WatchSource:0}: Error finding container bc6bbee7778e88c95e65ad7e7c540a3cb4fef17f3d184a5bb2412c26b3f05199: Status 404 returned error can't find the container with id bc6bbee7778e88c95e65ad7e7c540a3cb4fef17f3d184a5bb2412c26b3f05199 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.889478 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "03f33ae9-1e48-4adf-94bc-69ede69802d0" (UID: "03f33ae9-1e48-4adf-94bc-69ede69802d0"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.891575 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: if [ -n "nova_cell1" ]; then Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="nova_cell1" Mar 11 09:05:12 crc kubenswrapper[4808]: else Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:12 crc kubenswrapper[4808]: fi Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:12 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:12 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:12 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:12 crc kubenswrapper[4808]: # support updates Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.893018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" podUID="d56dace1-91d0-4b10-a4f1-06e5917d1675" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.898827 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "3fd1979f-d1de-42a8-be8e-b61087f737bc" (UID: "3fd1979f-d1de-42a8-be8e-b61087f737bc"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: W0311 09:05:12.905821 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041d9008_1855_405b_ad45_aee31ade42f2.slice/crio-cd1fb7cdbc8fd6866bae80960e3c16f0c1e3d58cc0a0353d88ec1d1fa16b8a38 WatchSource:0}: Error finding container cd1fb7cdbc8fd6866bae80960e3c16f0c1e3d58cc0a0353d88ec1d1fa16b8a38: Status 404 returned error can't find the container with id cd1fb7cdbc8fd6866bae80960e3c16f0c1e3d58cc0a0353d88ec1d1fa16b8a38 Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.906124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "760732fc-fc8a-4a24-beca-c969fb0260fe" (UID: "760732fc-fc8a-4a24-beca-c969fb0260fe"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.908581 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:12 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: if [ -n "glance" ]; then Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="glance" Mar 11 09:05:12 crc kubenswrapper[4808]: else Mar 11 09:05:12 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:12 crc kubenswrapper[4808]: fi Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:12 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:12 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:12 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:12 crc kubenswrapper[4808]: # support updates Mar 11 09:05:12 crc kubenswrapper[4808]: Mar 11 09:05:12 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:12 crc kubenswrapper[4808]: E0311 09:05:12.911124 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-b04e-account-create-update-6vvff" podUID="041d9008-1855-405b-ad45-aee31ade42f2" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940152 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940458 4808 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9deef18-212d-4f90-adbe-84f8bb0177e1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940481 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940490 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fd1979f-d1de-42a8-be8e-b61087f737bc-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940499 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/760732fc-fc8a-4a24-beca-c969fb0260fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940512 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.940520 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03f33ae9-1e48-4adf-94bc-69ede69802d0-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.943482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs" (OuterVolumeSpecName: 
"metrics-certs-tls-certs") pod "2f244d77-0b6a-4bcf-a6b4-dc7028019e29" (UID: "2f244d77-0b6a-4bcf-a6b4-dc7028019e29"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:12 crc kubenswrapper[4808]: I0311 09:05:12.968961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "073052f7-094c-467a-8910-b2ce25e5b981" (UID: "073052f7-094c-467a-8910-b2ce25e5b981"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.043735 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f244d77-0b6a-4bcf-a6b4-dc7028019e29-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.043779 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/073052f7-094c-467a-8910-b2ce25e5b981-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.043878 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.043939 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data podName:549d4ad5-b5b0-45bd-87b0-b9a6ee77866e nodeName:}" failed. No retries permitted until 2026-03-11 09:05:15.043922136 +0000 UTC m=+1565.997245456 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data") pod "rabbitmq-cell1-server-0" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e") : configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.064524 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.071286 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.263904 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.277851 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.289731 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.290708 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-8749b8c99-fl7cg" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-httpd" containerID="cri-o://1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" gracePeriod=30 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.290996 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-8749b8c99-fl7cg" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-server" containerID="cri-o://3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" gracePeriod=30 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.301130 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 
09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.336816 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:13 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: if [ -n "nova_api" ]; then Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="nova_api" Mar 11 09:05:13 crc kubenswrapper[4808]: else Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:13 crc kubenswrapper[4808]: fi Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:13 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:13 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:13 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:13 crc kubenswrapper[4808]: # support updates Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.341577 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c59a-account-create-update-6xx54" podUID="2f291b15-bc87-423a-8843-a3105ea5688b" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.350235 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spf22"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.380210 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-spf22"] Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.384578 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:13 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: if [ -n "nova_cell0" ]; then Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="nova_cell0" Mar 11 09:05:13 crc kubenswrapper[4808]: else Mar 11 
09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:13 crc kubenswrapper[4808]: fi Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:13 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:13 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:13 crc kubenswrapper[4808]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:13 crc kubenswrapper[4808]: # support updates Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.385645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" podUID="ba89fa63-8105-4acf-883b-f6aa1deb70de" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.399543 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:13 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc 
kubenswrapper[4808]: if [ -n "neutron" ]; then Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="neutron" Mar 11 09:05:13 crc kubenswrapper[4808]: else Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:13 crc kubenswrapper[4808]: fi Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:13 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:13 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:13 crc kubenswrapper[4808]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:13 crc kubenswrapper[4808]: # support updates Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.401281 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-dde6-account-create-update-5lc8z" podUID="40ec7dec-ae7d-49ff-95cd-af13c2ab08a5" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.404256 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.407550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerDied","Data":"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.405314 4808 generic.go:334] "Generic (PLEG): container finished" podID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerID="fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14" exitCode=0 Mar 11 
09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.410525 4808 generic.go:334] "Generic (PLEG): container finished" podID="192d6d53-4174-487e-b652-0ad887475d54" containerID="c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e" exitCode=143 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.410566 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerDied","Data":"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.412564 4808 generic.go:334] "Generic (PLEG): container finished" podID="16eab58c-16f2-4054-aae1-d4de176db24c" containerID="b632c1c0284e2ec0e59641f479394eaf69354fe98cfe9ecbe781f8db04299ccf" exitCode=143 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.412589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerDied","Data":"b632c1c0284e2ec0e59641f479394eaf69354fe98cfe9ecbe781f8db04299ccf"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.414391 4808 generic.go:334] "Generic (PLEG): container finished" podID="909d233e-60cb-4a66-989b-2dc8706ea143" containerID="e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332" exitCode=143 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.414436 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerDied","Data":"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.417042 4808 generic.go:334] "Generic (PLEG): container finished" podID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerID="7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2" exitCode=143 Mar 11 09:05:13 crc kubenswrapper[4808]: 
I0311 09:05:13.417085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerDied","Data":"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.418786 4808 generic.go:334] "Generic (PLEG): container finished" podID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerID="313c3f34d3027bd9947d2e5694c49e600d145074cf0323486a527bfeeb269fbc" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.418825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerDied","Data":"313c3f34d3027bd9947d2e5694c49e600d145074cf0323486a527bfeeb269fbc"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.420475 4808 generic.go:334] "Generic (PLEG): container finished" podID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerID="a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689" exitCode=143 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.420522 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerDied","Data":"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.421957 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e648abe-27f1-49ac-aebb-a38e206fe101" containerID="169cbac68f068fd8106574eac7954c3f52641921608913e7e44b9ca108d7b78a" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.421999 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e648abe-27f1-49ac-aebb-a38e206fe101","Type":"ContainerDied","Data":"169cbac68f068fd8106574eac7954c3f52641921608913e7e44b9ca108d7b78a"} Mar 11 09:05:13 crc kubenswrapper[4808]: 
I0311 09:05:13.424145 4808 generic.go:334] "Generic (PLEG): container finished" podID="5c805958-e789-4689-bbd0-dc1a1a116486" containerID="9dc74041b5de0f337b97cef5d7a082b76d5ccc5083fc2046cdd3bdf353512e2e" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.424185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerDied","Data":"9dc74041b5de0f337b97cef5d7a082b76d5ccc5083fc2046cdd3bdf353512e2e"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.424603 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-zcd8b"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.428875 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_073052f7-094c-467a-8910-b2ce25e5b981/ovsdbserver-sb/0.log" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.429081 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.429124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"073052f7-094c-467a-8910-b2ce25e5b981","Type":"ContainerDied","Data":"fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.429376 4808 scope.go:117] "RemoveContainer" containerID="0b1ad33990e113a73cab3cbdff3db9029771d7e96b54d5e45a675f0010b3a17c" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.432798 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac93b356-9c32-4094-9de5-8fd25c677810" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.432856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerDied","Data":"be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.433906 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" event={"ID":"ba89fa63-8105-4acf-883b-f6aa1deb70de","Type":"ContainerStarted","Data":"6531bb21f0f775083387aa1b2a09466ddb297520c5a4d29e1b113d3c5f119da5"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.436462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" event={"ID":"d56dace1-91d0-4b10-a4f1-06e5917d1675","Type":"ContainerStarted","Data":"bc6bbee7778e88c95e65ad7e7c540a3cb4fef17f3d184a5bb2412c26b3f05199"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.439942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c59a-account-create-update-6xx54" 
event={"ID":"2f291b15-bc87-423a-8843-a3105ea5688b","Type":"ContainerStarted","Data":"41410cf794a65fcf27950e6a4ecead64f1c295f55b728e8dee426b9235a27952"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.461319 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b04e-account-create-update-6vvff" event={"ID":"041d9008-1855-405b-ad45-aee31ade42f2","Type":"ContainerStarted","Data":"cd1fb7cdbc8fd6866bae80960e3c16f0c1e3d58cc0a0353d88ec1d1fa16b8a38"} Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.461429 4808 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.461462 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.461474 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.461490 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-8749b8c99-fl7cg: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.461542 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift podName:c8cf2302-c420-4e0f-a292-a601a5f66bfa nodeName:}" failed. No retries permitted until 2026-03-11 09:05:17.461527794 +0000 UTC m=+1568.414851114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift") pod "swift-proxy-8749b8c99-fl7cg" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488534 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="8703e021e4664d65f7198a1e10e27fae65ecd623ec350adc5affd0e319e1f91c" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488588 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="36d57f8f584cb1d8ebdc130edda1da090b5344f1959c1b5fbee4de63ad660d1d" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488598 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="5610b5414b923dbe5f29196fe9b69a93bc333d712a301053a8290a033d1900e2" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488608 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="ba474a4645c4e2ea029e0f9ff8bded4ccbfb98f7157a6bc5a8efcb5ca613c7de" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488616 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="82f8358e3c23ae5caf4686e4d2ed129be3db93f6c1646acae71a1541a038ba65" exitCode=0 Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"8703e021e4664d65f7198a1e10e27fae65ecd623ec350adc5affd0e319e1f91c"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488735 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"36d57f8f584cb1d8ebdc130edda1da090b5344f1959c1b5fbee4de63ad660d1d"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"5610b5414b923dbe5f29196fe9b69a93bc333d712a301053a8290a033d1900e2"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"ba474a4645c4e2ea029e0f9ff8bded4ccbfb98f7157a6bc5a8efcb5ca613c7de"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.488767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"82f8358e3c23ae5caf4686e4d2ed129be3db93f6c1646acae71a1541a038ba65"} Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.501171 4808 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-pcjsh" secret="" err="secret \"galera-openstack-cell1-dockercfg-kg7gr\" not found" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.501750 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.502280 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-t6dx4" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.528937 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:13 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: if [ -n "" ]; then Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="" Mar 11 09:05:13 crc kubenswrapper[4808]: else Mar 11 09:05:13 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:13 crc kubenswrapper[4808]: fi Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:13 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:13 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:13 crc kubenswrapper[4808]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:13 crc kubenswrapper[4808]: # support updates Mar 11 09:05:13 crc kubenswrapper[4808]: Mar 11 09:05:13 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.529080 4808 scope.go:117] "RemoveContainer" containerID="464b1dff84d68cb54e2785c4998d80bcd9d2d9c96576bd186fecafee0ad6ee92" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.536947 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-pcjsh" podUID="b4922ac8-998e-4ba3-88cb-6805fa10c7fd" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.539965 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8749b8c99-fl7cg" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.175:8080/healthcheck\": dial tcp 10.217.0.175:8080: connect: connection refused" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.540026 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8749b8c99-fl7cg" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.175:8080/healthcheck\": dial tcp 10.217.0.175:8080: connect: connection refused" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563027 4808 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563100 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. 
No retries permitted until 2026-03-11 09:05:17.563086536 +0000 UTC m=+1568.516409856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563159 4808 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563179 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts podName:45e8823d-6df6-41fb-b7cd-9cb19e680db1 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:17.563173518 +0000 UTC m=+1568.516496838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts") pod "cinder-api-0" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1") : secret "cinder-scripts" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563235 4808 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563247 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563255 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.563265 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 
09:05:13.563299 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:05:17.563279732 +0000 UTC m=+1568.516603052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.577558 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.597750 4808 scope.go:117] "RemoveContainer" containerID="daa9d5be0dd494f080e478e2334f54c0d62a05a1d3e5c21ee9c65ba6d4767c26" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.656919 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.659146 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.693415 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.713929 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.719933 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-t6dx4"] Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.766845 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9deef18_212d_4f90_adbe_84f8bb0177e1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073052f7_094c_467a_8910_b2ce25e5b981.slice/crio-fc6459998b365291973bdcf6e6c732481664a5654cff7869f5dee26b3f089e03\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760732fc_fc8a_4a24_beca_c969fb0260fe.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073052f7_094c_467a_8910_b2ce25e5b981.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9deef18_212d_4f90_adbe_84f8bb0177e1.slice/crio-c56a17b436c1cbb93386b8afa966880c3fbb5a6b9d4558dc14343d3a9befc357\": RecentStats: unable to find data in memory cache]" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.767480 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrmnv\" (UniqueName: \"kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv\") pod \"3e648abe-27f1-49ac-aebb-a38e206fe101\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.767582 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs\") pod \"3e648abe-27f1-49ac-aebb-a38e206fe101\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.767758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs\") pod \"3e648abe-27f1-49ac-aebb-a38e206fe101\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.767778 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle\") pod \"3e648abe-27f1-49ac-aebb-a38e206fe101\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.767828 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data\") pod \"3e648abe-27f1-49ac-aebb-a38e206fe101\" (UID: \"3e648abe-27f1-49ac-aebb-a38e206fe101\") " Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.768411 4808 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.768456 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts podName:b4922ac8-998e-4ba3-88cb-6805fa10c7fd nodeName:}" failed. No retries permitted until 2026-03-11 09:05:15.768442024 +0000 UTC m=+1566.721765344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts") pod "root-account-create-update-pcjsh" (UID: "b4922ac8-998e-4ba3-88cb-6805fa10c7fd") : configmap "openstack-cell1-scripts" not found Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.774689 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv" (OuterVolumeSpecName: "kube-api-access-lrmnv") pod "3e648abe-27f1-49ac-aebb-a38e206fe101" (UID: "3e648abe-27f1-49ac-aebb-a38e206fe101"). InnerVolumeSpecName "kube-api-access-lrmnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.793794 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.822114 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02770ee6-83bc-4c09-a98e-d1e1624bd759" path="/var/lib/kubelet/pods/02770ee6-83bc-4c09-a98e-d1e1624bd759/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.825703 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e648abe-27f1-49ac-aebb-a38e206fe101" (UID: "3e648abe-27f1-49ac-aebb-a38e206fe101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.828606 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data" (OuterVolumeSpecName: "config-data") pod "3e648abe-27f1-49ac-aebb-a38e206fe101" (UID: "3e648abe-27f1-49ac-aebb-a38e206fe101"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.828999 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" path="/var/lib/kubelet/pods/03f33ae9-1e48-4adf-94bc-69ede69802d0/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.835678 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073052f7-094c-467a-8910-b2ce25e5b981" path="/var/lib/kubelet/pods/073052f7-094c-467a-8910-b2ce25e5b981/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.836417 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0beff323-e564-46ab-b5d1-ff40920e373e" path="/var/lib/kubelet/pods/0beff323-e564-46ab-b5d1-ff40920e373e/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.837042 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fb8c78-3b3c-4c0c-afba-59b67c406374" path="/var/lib/kubelet/pods/14fb8c78-3b3c-4c0c-afba-59b67c406374/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.846980 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e5dd70-3baf-4a95-be4e-d3f27d963aa6" path="/var/lib/kubelet/pods/16e5dd70-3baf-4a95-be4e-d3f27d963aa6/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.848302 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" path="/var/lib/kubelet/pods/2f244d77-0b6a-4bcf-a6b4-dc7028019e29/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.849056 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" path="/var/lib/kubelet/pods/3fd1979f-d1de-42a8-be8e-b61087f737bc/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.852128 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730a184e-642a-4df2-a747-c04625a046b8" 
path="/var/lib/kubelet/pods/730a184e-642a-4df2-a747-c04625a046b8/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.852998 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d7eafe-ecf9-4784-853c-7538a5bb00ca" path="/var/lib/kubelet/pods/75d7eafe-ecf9-4784-853c-7538a5bb00ca/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.853491 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" path="/var/lib/kubelet/pods/760732fc-fc8a-4a24-beca-c969fb0260fe/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.854012 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa60fd1-a148-4127-ae9d-72f6a61f6cce" path="/var/lib/kubelet/pods/afa60fd1-a148-4127-ae9d-72f6a61f6cce/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.856431 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc20fcdb-674b-47bf-abcb-c7985d23f8c8" path="/var/lib/kubelet/pods/cc20fcdb-674b-47bf-abcb-c7985d23f8c8/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.857123 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7af695b-1871-4cff-91ad-0bf62afc9ef6" path="/var/lib/kubelet/pods/e7af695b-1871-4cff-91ad-0bf62afc9ef6/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.857690 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9deef18-212d-4f90-adbe-84f8bb0177e1" path="/var/lib/kubelet/pods/f9deef18-212d-4f90-adbe-84f8bb0177e1/volumes" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.859707 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:13 crc 
kubenswrapper[4808]: I0311 09:05:13.870435 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870592 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870626 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v4xg\" (UniqueName: \"kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870648 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870735 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs\") 
pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870782 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.870814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts\") pod \"5c805958-e789-4689-bbd0-dc1a1a116486\" (UID: \"5c805958-e789-4689-bbd0-dc1a1a116486\") " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.871342 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.871406 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.871421 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrmnv\" (UniqueName: \"kubernetes.io/projected/3e648abe-27f1-49ac-aebb-a38e206fe101-kube-api-access-lrmnv\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.872943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). 
InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.874552 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.877912 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.884936 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.885746 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:13 crc kubenswrapper[4808]: E0311 09:05:13.885957 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.887595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg" (OuterVolumeSpecName: "kube-api-access-4v4xg") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "kube-api-access-4v4xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.889538 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.895555 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "3e648abe-27f1-49ac-aebb-a38e206fe101" (UID: "3e648abe-27f1-49ac-aebb-a38e206fe101"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.904304 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.916012 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.954657 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "3e648abe-27f1-49ac-aebb-a38e206fe101" (UID: "3e648abe-27f1-49ac-aebb-a38e206fe101"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973464 4808 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973488 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973498 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973507 4808 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973517 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v4xg\" (UniqueName: \"kubernetes.io/projected/5c805958-e789-4689-bbd0-dc1a1a116486-kube-api-access-4v4xg\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973542 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973551 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c805958-e789-4689-bbd0-dc1a1a116486-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: 
I0311 09:05:13.973560 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c805958-e789-4689-bbd0-dc1a1a116486-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.973569 4808 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e648abe-27f1-49ac-aebb-a38e206fe101-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:13 crc kubenswrapper[4808]: I0311 09:05:13.983127 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "5c805958-e789-4689-bbd0-dc1a1a116486" (UID: "5c805958-e789-4689-bbd0-dc1a1a116486"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.002048 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.075574 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.075610 4808 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c805958-e789-4689-bbd0-dc1a1a116486-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.075678 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.075726 4808 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data podName:a1e42e33-7453-4b97-abca-0c45cc27faa2 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:18.075710715 +0000 UTC m=+1569.029034035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data") pod "rabbitmq-server-0" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2") : configmap "rabbitmq-config-data" not found Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.141964 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.264967 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.281576 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts\") pod \"ba89fa63-8105-4acf-883b-f6aa1deb70de\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.281633 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x4xm\" (UniqueName: \"kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm\") pod \"ba89fa63-8105-4acf-883b-f6aa1deb70de\" (UID: \"ba89fa63-8105-4acf-883b-f6aa1deb70de\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.286037 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba89fa63-8105-4acf-883b-f6aa1deb70de" (UID: 
"ba89fa63-8105-4acf-883b-f6aa1deb70de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.291607 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.291650 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm" (OuterVolumeSpecName: "kube-api-access-5x4xm") pod "ba89fa63-8105-4acf-883b-f6aa1deb70de" (UID: "ba89fa63-8105-4acf-883b-f6aa1deb70de"). InnerVolumeSpecName "kube-api-access-5x4xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.316793 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.321118 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.383835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhs6z\" (UniqueName: \"kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z\") pod \"85b43352-7580-46b6-a90c-93a2598ac134\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.383935 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqg6h\" (UniqueName: \"kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h\") pod \"2f291b15-bc87-423a-8843-a3105ea5688b\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.383977 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts\") pod \"85b43352-7580-46b6-a90c-93a2598ac134\" (UID: \"85b43352-7580-46b6-a90c-93a2598ac134\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.384139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts\") pod \"2f291b15-bc87-423a-8843-a3105ea5688b\" (UID: \"2f291b15-bc87-423a-8843-a3105ea5688b\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.384188 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6g4b\" (UniqueName: \"kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b\") pod \"d56dace1-91d0-4b10-a4f1-06e5917d1675\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.384207 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts\") pod \"d56dace1-91d0-4b10-a4f1-06e5917d1675\" (UID: \"d56dace1-91d0-4b10-a4f1-06e5917d1675\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.385216 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba89fa63-8105-4acf-883b-f6aa1deb70de-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.385241 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x4xm\" (UniqueName: \"kubernetes.io/projected/ba89fa63-8105-4acf-883b-f6aa1deb70de-kube-api-access-5x4xm\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.385846 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d56dace1-91d0-4b10-a4f1-06e5917d1675" (UID: "d56dace1-91d0-4b10-a4f1-06e5917d1675"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.386307 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f291b15-bc87-423a-8843-a3105ea5688b" (UID: "2f291b15-bc87-423a-8843-a3105ea5688b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.386750 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85b43352-7580-46b6-a90c-93a2598ac134" (UID: "85b43352-7580-46b6-a90c-93a2598ac134"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.390422 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z" (OuterVolumeSpecName: "kube-api-access-vhs6z") pod "85b43352-7580-46b6-a90c-93a2598ac134" (UID: "85b43352-7580-46b6-a90c-93a2598ac134"). InnerVolumeSpecName "kube-api-access-vhs6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.390556 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h" (OuterVolumeSpecName: "kube-api-access-bqg6h") pod "2f291b15-bc87-423a-8843-a3105ea5688b" (UID: "2f291b15-bc87-423a-8843-a3105ea5688b"). InnerVolumeSpecName "kube-api-access-bqg6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.399305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b" (OuterVolumeSpecName: "kube-api-access-h6g4b") pod "d56dace1-91d0-4b10-a4f1-06e5917d1675" (UID: "d56dace1-91d0-4b10-a4f1-06e5917d1675"). InnerVolumeSpecName "kube-api-access-h6g4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.428552 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429256 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429271 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429293 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="ovsdbserver-sb" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429301 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="ovsdbserver-sb" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429324 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="mysql-bootstrap" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429333 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="mysql-bootstrap" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429354 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="galera" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429372 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="galera" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429392 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" 
containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429398 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429410 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429417 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429440 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429447 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429462 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="init" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429468 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="init" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429477 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="ovsdbserver-nb" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429485 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="ovsdbserver-nb" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429500 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e648abe-27f1-49ac-aebb-a38e206fe101" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429507 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e648abe-27f1-49ac-aebb-a38e206fe101" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.429533 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="dnsmasq-dns" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429539 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="dnsmasq-dns" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429905 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="ovsdbserver-nb" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429926 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="760732fc-fc8a-4a24-beca-c969fb0260fe" containerName="dnsmasq-dns" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429946 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429956 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f33ae9-1e48-4adf-94bc-69ede69802d0" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429971 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" containerName="galera" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.429981 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd1979f-d1de-42a8-be8e-b61087f737bc" containerName="ovn-controller" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.430000 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="073052f7-094c-467a-8910-b2ce25e5b981" containerName="ovsdbserver-sb" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.430055 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e648abe-27f1-49ac-aebb-a38e206fe101" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.430077 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f244d77-0b6a-4bcf-a6b4-dc7028019e29" containerName="openstack-network-exporter" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.432134 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.434550 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.441648 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.452172 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.464143 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:14 crc kubenswrapper[4808]: E0311 09:05:14.464258 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.471604 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.480808 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486011 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7p2\" (UniqueName: \"kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2\") pod \"041d9008-1855-405b-ad45-aee31ade42f2\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486236 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts\") pod \"041d9008-1855-405b-ad45-aee31ade42f2\" (UID: \"041d9008-1855-405b-ad45-aee31ade42f2\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486694 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqg6h\" (UniqueName: \"kubernetes.io/projected/2f291b15-bc87-423a-8843-a3105ea5688b-kube-api-access-bqg6h\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486705 4808 reconciler_common.go:293] "Volume detached 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b43352-7580-46b6-a90c-93a2598ac134-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486713 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f291b15-bc87-423a-8843-a3105ea5688b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486722 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56dace1-91d0-4b10-a4f1-06e5917d1675-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486730 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6g4b\" (UniqueName: \"kubernetes.io/projected/d56dace1-91d0-4b10-a4f1-06e5917d1675-kube-api-access-h6g4b\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.486739 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhs6z\" (UniqueName: \"kubernetes.io/projected/85b43352-7580-46b6-a90c-93a2598ac134-kube-api-access-vhs6z\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.487276 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041d9008-1855-405b-ad45-aee31ade42f2" (UID: "041d9008-1855-405b-ad45-aee31ade42f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.491440 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2" (OuterVolumeSpecName: "kube-api-access-8c7p2") pod "041d9008-1855-405b-ad45-aee31ade42f2" (UID: "041d9008-1855-405b-ad45-aee31ade42f2"). InnerVolumeSpecName "kube-api-access-8c7p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.520494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" event={"ID":"d56dace1-91d0-4b10-a4f1-06e5917d1675","Type":"ContainerDied","Data":"bc6bbee7778e88c95e65ad7e7c540a3cb4fef17f3d184a5bb2412c26b3f05199"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.520573 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3a44-account-create-update-bp6hq" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.522522 4808 generic.go:334] "Generic (PLEG): container finished" podID="16eab58c-16f2-4054-aae1-d4de176db24c" containerID="aa0c1e84c07ee55b3422a4d17c6d167a2c103b98d130bfcf9441a6a8c53b6cb2" exitCode=0 Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.522572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerDied","Data":"aa0c1e84c07ee55b3422a4d17c6d167a2c103b98d130bfcf9441a6a8c53b6cb2"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.523958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e648abe-27f1-49ac-aebb-a38e206fe101","Type":"ContainerDied","Data":"cee64291f1f08ef382b743b230ebdae20a4aff24fbd3810672176112c8afae9e"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 
09:05:14.523983 4808 scope.go:117] "RemoveContainer" containerID="169cbac68f068fd8106574eac7954c3f52641921608913e7e44b9ca108d7b78a" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.524079 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.528843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c59a-account-create-update-6xx54" event={"ID":"2f291b15-bc87-423a-8843-a3105ea5688b","Type":"ContainerDied","Data":"41410cf794a65fcf27950e6a4ecead64f1c295f55b728e8dee426b9235a27952"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.528904 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c59a-account-create-update-6xx54" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.535544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b04e-account-create-update-6vvff" event={"ID":"041d9008-1855-405b-ad45-aee31ade42f2","Type":"ContainerDied","Data":"cd1fb7cdbc8fd6866bae80960e3c16f0c1e3d58cc0a0353d88ec1d1fa16b8a38"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.535659 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b04e-account-create-update-6vvff" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.548099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c800-account-create-update-cnrds" event={"ID":"85b43352-7580-46b6-a90c-93a2598ac134","Type":"ContainerDied","Data":"aabe2e2bd1867deaa13fefb19a67bbe19c0b7135d5cff6c4c262817d915a16cc"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.548215 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c800-account-create-update-cnrds" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.566822 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5c805958-e789-4689-bbd0-dc1a1a116486","Type":"ContainerDied","Data":"d446736c3f1848ec2ed002693bffc75f040a0f6d96d48605bb0b1e96e588b24d"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.566863 4808 scope.go:117] "RemoveContainer" containerID="9dc74041b5de0f337b97cef5d7a082b76d5ccc5083fc2046cdd3bdf353512e2e" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.566959 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.568928 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.568983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ac71-account-create-update-jdqhw" event={"ID":"ba89fa63-8105-4acf-883b-f6aa1deb70de","Type":"ContainerDied","Data":"6531bb21f0f775083387aa1b2a09466ddb297520c5a4d29e1b113d3c5f119da5"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.578887 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fcw2\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588258 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588433 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588453 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588532 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588584 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588693 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data\") pod 
\"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.588724 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs\") pod \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\" (UID: \"c8cf2302-c420-4e0f-a292-a601a5f66bfa\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.589493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.589542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlhh\" (UniqueName: \"kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.589618 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041d9008-1855-405b-ad45-aee31ade42f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.589636 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c7p2\" (UniqueName: \"kubernetes.io/projected/041d9008-1855-405b-ad45-aee31ade42f2-kube-api-access-8c7p2\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.589863 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.590157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.594259 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dde6-account-create-update-5lc8z" event={"ID":"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5","Type":"ContainerStarted","Data":"a19aa85183ac7524e08e8dc76739b122f1a6d9e7caf3282801da873b2fbbb8bf"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.595149 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.596108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2" (OuterVolumeSpecName: "kube-api-access-6fcw2") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "kube-api-access-6fcw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.611333 4808 scope.go:117] "RemoveContainer" containerID="e3e5d3531e06a473345f5a4c4e07e59335e271eccb4b7baaa2c90cf709058c91" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612615 4808 generic.go:334] "Generic (PLEG): container finished" podID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerID="3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" exitCode=0 Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612632 4808 generic.go:334] "Generic (PLEG): container finished" podID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerID="1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" exitCode=0 Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612699 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8749b8c99-fl7cg" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612699 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerDied","Data":"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerDied","Data":"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.612760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8749b8c99-fl7cg" event={"ID":"c8cf2302-c420-4e0f-a292-a601a5f66bfa","Type":"ContainerDied","Data":"53a5aa11a7efd6aa2edf975053013012ba3c78d137ef5363d2a76cff4d87301c"} Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.688320 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692617 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzlhh\" (UniqueName: \"kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692681 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692696 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fcw2\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-kube-api-access-6fcw2\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692709 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 
09:05:14.692720 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8cf2302-c420-4e0f-a292-a601a5f66bfa-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.692730 4808 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c8cf2302-c420-4e0f-a292-a601a5f66bfa-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.693845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.719177 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzlhh\" (UniqueName: \"kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh\") pod \"root-account-create-update-5hjcc\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.731717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.739827 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.781024 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.803119 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data\") pod \"16eab58c-16f2-4054-aae1-d4de176db24c\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.803534 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom\") pod \"16eab58c-16f2-4054-aae1-d4de176db24c\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.803563 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bvxp\" (UniqueName: \"kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp\") pod \"16eab58c-16f2-4054-aae1-d4de176db24c\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.803661 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle\") pod \"16eab58c-16f2-4054-aae1-d4de176db24c\" (UID: \"16eab58c-16f2-4054-aae1-d4de176db24c\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.803754 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs\") pod \"16eab58c-16f2-4054-aae1-d4de176db24c\" (UID: 
\"16eab58c-16f2-4054-aae1-d4de176db24c\") " Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.804272 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.810302 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs" (OuterVolumeSpecName: "logs") pod "16eab58c-16f2-4054-aae1-d4de176db24c" (UID: "16eab58c-16f2-4054-aae1-d4de176db24c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.812857 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16eab58c-16f2-4054-aae1-d4de176db24c" (UID: "16eab58c-16f2-4054-aae1-d4de176db24c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.818279 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.824698 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp" (OuterVolumeSpecName: "kube-api-access-8bvxp") pod "16eab58c-16f2-4054-aae1-d4de176db24c" (UID: "16eab58c-16f2-4054-aae1-d4de176db24c"). 
InnerVolumeSpecName "kube-api-access-8bvxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.847173 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.856597 4808 scope.go:117] "RemoveContainer" containerID="3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.859910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16eab58c-16f2-4054-aae1-d4de176db24c" (UID: "16eab58c-16f2-4054-aae1-d4de176db24c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.889580 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c800-account-create-update-cnrds"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.915952 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16eab58c-16f2-4054-aae1-d4de176db24c-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.915975 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bvxp\" (UniqueName: \"kubernetes.io/projected/16eab58c-16f2-4054-aae1-d4de176db24c-kube-api-access-8bvxp\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.915984 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.915993 4808 reconciler_common.go:293] "Volume detached for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.916002 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.917937 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data" (OuterVolumeSpecName: "config-data") pod "16eab58c-16f2-4054-aae1-d4de176db24c" (UID: "16eab58c-16f2-4054-aae1-d4de176db24c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.920546 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.937427 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3a44-account-create-update-bp6hq"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.949528 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data" (OuterVolumeSpecName: "config-data") pod "c8cf2302-c420-4e0f-a292-a601a5f66bfa" (UID: "c8cf2302-c420-4e0f-a292-a601a5f66bfa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.975427 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] Mar 11 09:05:14 crc kubenswrapper[4808]: I0311 09:05:14.985513 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ac71-account-create-update-jdqhw"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.000320 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.012300 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c59a-account-create-update-6xx54"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.018072 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cf2302-c420-4e0f-a292-a601a5f66bfa-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.018103 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eab58c-16f2-4054-aae1-d4de176db24c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.023197 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.029805 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.046019 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.171:8776/healthcheck\": read tcp 10.217.0.2:49652->10.217.0.171:8776: read: connection reset by peer" Mar 11 09:05:15 crc 
kubenswrapper[4808]: I0311 09:05:15.048390 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.056951 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b04e-account-create-update-6vvff"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.090654 4808 scope.go:117] "RemoveContainer" containerID="1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.120133 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.120194 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data podName:549d4ad5-b5b0-45bd-87b0-b9a6ee77866e nodeName:}" failed. No retries permitted until 2026-03-11 09:05:19.120179743 +0000 UTC m=+1570.073503063 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data") pod "rabbitmq-cell1-server-0" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e") : configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.186300 4808 scope.go:117] "RemoveContainer" containerID="3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.192814 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32\": container with ID starting with 3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32 not found: ID does not exist" containerID="3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.192853 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32"} err="failed to get container status \"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32\": rpc error: code = NotFound desc = could not find container \"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32\": container with ID starting with 3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32 not found: ID does not exist" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.192889 4808 scope.go:117] "RemoveContainer" containerID="1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.195099 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311\": container with ID starting with 
1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311 not found: ID does not exist" containerID="1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.195139 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311"} err="failed to get container status \"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311\": rpc error: code = NotFound desc = could not find container \"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311\": container with ID starting with 1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311 not found: ID does not exist" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.195154 4808 scope.go:117] "RemoveContainer" containerID="3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.199081 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32"} err="failed to get container status \"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32\": rpc error: code = NotFound desc = could not find container \"3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32\": container with ID starting with 3a2a599e4e760a6afe597af2b5d260b266e6c1394ce37e557149afa679478a32 not found: ID does not exist" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.199139 4808 scope.go:117] "RemoveContainer" containerID="1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.204721 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311"} err="failed to get container status 
\"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311\": rpc error: code = NotFound desc = could not find container \"1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311\": container with ID starting with 1fd24f68692976a2a706a06224d08627bc5728d36c24719be4fb6237bdcc7311 not found: ID does not exist" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.283795 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.291644 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-8749b8c99-fl7cg"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.321104 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.336270 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.336588 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-central-agent" containerID="cri-o://4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.336743 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="proxy-httpd" containerID="cri-o://476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.336792 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="sg-core" 
containerID="cri-o://d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.336839 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-notification-agent" containerID="cri-o://cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.347047 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.347244 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7fde4956-a749-475e-9b5e-978fd33a4239" containerName="kube-state-metrics" containerID="cri-o://93a0e55009e5032ca6043f65c6537e79b802bcb416d44c8d232d960a3cf12786" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.356027 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.427684 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428347 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428395 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428424 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428505 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428547 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428579 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvdr\" (UniqueName: \"kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr\") pod \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428597 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66gj5\" (UniqueName: \"kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5\") pod \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\" (UID: \"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.428620 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts\") pod \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\" (UID: \"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.430609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs" (OuterVolumeSpecName: "logs") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.430962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40ec7dec-ae7d-49ff-95cd-af13c2ab08a5" (UID: "40ec7dec-ae7d-49ff-95cd-af13c2ab08a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.432100 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.434232 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5" (OuterVolumeSpecName: "kube-api-access-66gj5") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "kube-api-access-66gj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.454913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr" (OuterVolumeSpecName: "kube-api-access-lzvdr") pod "40ec7dec-ae7d-49ff-95cd-af13c2ab08a5" (UID: "40ec7dec-ae7d-49ff-95cd-af13c2ab08a5"). InnerVolumeSpecName "kube-api-access-lzvdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.463472 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts" (OuterVolumeSpecName: "scripts") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.500634 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-35b5-account-create-update-4zxv8"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.523506 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.523746 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerName="memcached" containerID="cri-o://82f8eb58e9f3ad3c8cc07e575893f64a2f9e5741e48c57fc799afc4a33b2dc6a" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.540932 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-35b5-account-create-update-4zxv8"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.579775 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts\") pod \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.579835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chn2x\" (UniqueName: \"kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x\") pod \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\" (UID: \"b4922ac8-998e-4ba3-88cb-6805fa10c7fd\") " Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.580198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4922ac8-998e-4ba3-88cb-6805fa10c7fd" (UID: "b4922ac8-998e-4ba3-88cb-6805fa10c7fd"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.584691 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x" (OuterVolumeSpecName: "kube-api-access-chn2x") pod "b4922ac8-998e-4ba3-88cb-6805fa10c7fd" (UID: "b4922ac8-998e-4ba3-88cb-6805fa10c7fd"). InnerVolumeSpecName "kube-api-access-chn2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.592351 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-35b5-account-create-update-vhqzx"] Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.593258 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener-log" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.593330 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener-log" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.593423 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-log" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.593482 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-log" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.593566 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-server" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.593630 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-server" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 
09:05:15.593693 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.593745 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.593822 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-api" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.593879 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-api" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.593984 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-httpd" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.600442 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-httpd" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.596814 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data" (OuterVolumeSpecName: "config-data") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.601712 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-api" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.602318 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener-log" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.602676 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-server" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.602744 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" containerName="barbican-keystone-listener" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.602808 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" containerName="proxy-httpd" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.602961 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerName="placement-log" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.608113 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612210 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612400 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvdr\" (UniqueName: \"kubernetes.io/projected/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-kube-api-access-lzvdr\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612423 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66gj5\" (UniqueName: \"kubernetes.io/projected/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-kube-api-access-66gj5\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612434 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612450 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612459 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612468 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612481 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.612490 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chn2x\" (UniqueName: \"kubernetes.io/projected/b4922ac8-998e-4ba3-88cb-6805fa10c7fd-kube-api-access-chn2x\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.670905 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-35b5-account-create-update-vhqzx"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.678014 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cgtv2"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.679408 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.679403 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86db6b574-lsd58" event={"ID":"16eab58c-16f2-4054-aae1-d4de176db24c","Type":"ContainerDied","Data":"0ec428591a54fc8c98b0acdd4e7245c695734dd18c90913fdfe58c9477c71b2f"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.679730 4808 scope.go:117] "RemoveContainer" containerID="aa0c1e84c07ee55b3422a4d17c6d167a2c103b98d130bfcf9441a6a8c53b6cb2" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.681233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.694161 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cgtv2"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.696009 4808 generic.go:334] "Generic (PLEG): container finished" podID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" containerID="797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb" exitCode=0 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.696084 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-665556c5fd-bnc2f" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.696122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerDied","Data":"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.696151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-665556c5fd-bnc2f" event={"ID":"91b4b44f-8d7c-4933-b20d-a1d79d7c90b4","Type":"ContainerDied","Data":"88aad07c41b45aa7c909d0c3c3f13a25003548f918ddba0ce040d2df8905eba5"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.712488 4808 scope.go:117] "RemoveContainer" containerID="b632c1c0284e2ec0e59641f479394eaf69354fe98cfe9ecbe781f8db04299ccf" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.712633 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tbnpr"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.714065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " 
pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.714108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrsb\" (UniqueName: \"kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.714300 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.721970 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tbnpr"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.724518 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:53402->10.217.0.212:8775: read: connection reset by peer" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.724618 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:53390->10.217.0.212:8775: read: connection reset by peer" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.743329 4808 generic.go:334] "Generic (PLEG): container finished" podID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerID="69ba1fe4eed62164ec20e29e3973c8d7238e07cc99a3de39bc2e21c2312afc26" exitCode=0 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 
09:05:15.743426 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerDied","Data":"69ba1fe4eed62164ec20e29e3973c8d7238e07cc99a3de39bc2e21c2312afc26"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.744886 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.745903 4808 scope.go:117] "RemoveContainer" containerID="797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.754292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.758973 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.759736 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6447d66dcc-mc8df" podUID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" containerName="keystone-api" containerID="cri-o://b47c6964048276c5d98b0417ce7ca53e6a7bf3b09f49607b9460de27e6f58132" gracePeriod=30 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.764061 4808 generic.go:334] "Generic (PLEG): container finished" podID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerID="600ad91b4b18f17d71cd6096452f04faaca352ace36331fc5ba1a67011114ccf" exitCode=0 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.764149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerDied","Data":"600ad91b4b18f17d71cd6096452f04faaca352ace36331fc5ba1a67011114ccf"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.767685 4808 generic.go:334] "Generic (PLEG): container finished" podID="7fde4956-a749-475e-9b5e-978fd33a4239" containerID="93a0e55009e5032ca6043f65c6537e79b802bcb416d44c8d232d960a3cf12786" exitCode=2 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.767751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7fde4956-a749-475e-9b5e-978fd33a4239","Type":"ContainerDied","Data":"93a0e55009e5032ca6043f65c6537e79b802bcb416d44c8d232d960a3cf12786"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.769948 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-l7k4m"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.775657 4808 generic.go:334] "Generic (PLEG): container finished" podID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" exitCode=2 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.775723 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerDied","Data":"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.782354 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-l7k4m"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.804010 4808 generic.go:334] "Generic (PLEG): container finished" podID="45a36b4a-f974-46f6-a719-9765499308ed" containerID="cf55bdcbeb3d626ba9dfd3112f8a4875f325fed6c1ddb8829be7326c5b814762" exitCode=0 Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.804554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" (UID: "91b4b44f-8d7c-4933-b20d-a1d79d7c90b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.807319 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dde6-account-create-update-5lc8z" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.823923 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824012 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrsb\" (UniqueName: \"kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824126 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041d9008-1855-405b-ad45-aee31ade42f2" path="/var/lib/kubelet/pods/041d9008-1855-405b-ad45-aee31ade42f2/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824173 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824185 4808 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824570 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f291b15-bc87-423a-8843-a3105ea5688b" path="/var/lib/kubelet/pods/2f291b15-bc87-423a-8843-a3105ea5688b/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.824648 4808 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.824695 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:16.32468042 +0000 UTC m=+1567.278003730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : configmap "openstack-scripts" not found Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.824922 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e648abe-27f1-49ac-aebb-a38e206fe101" path="/var/lib/kubelet/pods/3e648abe-27f1-49ac-aebb-a38e206fe101/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.825612 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c805958-e789-4689-bbd0-dc1a1a116486" path="/var/lib/kubelet/pods/5c805958-e789-4689-bbd0-dc1a1a116486/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.828947 4808 projected.go:194] Error preparing data for projected volume kube-api-access-4lrsb for pod openstack/keystone-35b5-account-create-update-vhqzx: failed to fetch token: 
serviceaccounts "galera-openstack" not found Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.829058 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:16.329040167 +0000 UTC m=+1567.282363487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4lrsb" (UniqueName: "kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.836963 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="657b6f1c-c040-46ea-b7b9-603c41615d66" path="/var/lib/kubelet/pods/657b6f1c-c040-46ea-b7b9-603c41615d66/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.837477 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b43352-7580-46b6-a90c-93a2598ac134" path="/var/lib/kubelet/pods/85b43352-7580-46b6-a90c-93a2598ac134/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.837842 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d7affe-753b-40fe-8e4a-fbe3d2618527" path="/var/lib/kubelet/pods/93d7affe-753b-40fe-8e4a-fbe3d2618527/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.839988 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a124b7a1-bcf0-4d53-82b1-2a32992b56b8" path="/var/lib/kubelet/pods/a124b7a1-bcf0-4d53-82b1-2a32992b56b8/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.840903 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba89fa63-8105-4acf-883b-f6aa1deb70de" 
path="/var/lib/kubelet/pods/ba89fa63-8105-4acf-883b-f6aa1deb70de/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.841249 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2d33c7-29e8-46b7-a743-80d0eed3412d" path="/var/lib/kubelet/pods/bc2d33c7-29e8-46b7-a743-80d0eed3412d/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.841821 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cf2302-c420-4e0f-a292-a601a5f66bfa" path="/var/lib/kubelet/pods/c8cf2302-c420-4e0f-a292-a601a5f66bfa/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.842849 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56dace1-91d0-4b10-a4f1-06e5917d1675" path="/var/lib/kubelet/pods/d56dace1-91d0-4b10-a4f1-06e5917d1675/volumes" Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.843578 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerDied","Data":"cf55bdcbeb3d626ba9dfd3112f8a4875f325fed6c1ddb8829be7326c5b814762"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.843607 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dde6-account-create-update-5lc8z" event={"ID":"40ec7dec-ae7d-49ff-95cd-af13c2ab08a5","Type":"ContainerDied","Data":"a19aa85183ac7524e08e8dc76739b122f1a6d9e7caf3282801da873b2fbbb8bf"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.843620 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-35b5-account-create-update-vhqzx"] Mar 11 09:05:15 crc kubenswrapper[4808]: E0311 09:05:15.844021 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4lrsb operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-35b5-account-create-update-vhqzx" podUID="02e2eb2e-01e6-4f49-9880-3aed7eb20bf6" 
Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.854609 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.863124 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.869915 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-86db6b574-lsd58"] Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.880217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pcjsh" event={"ID":"b4922ac8-998e-4ba3-88cb-6805fa10c7fd","Type":"ContainerDied","Data":"d3df68d4100af01b078ff305f0a9dce550768827d624e7f2306f7b769344af76"} Mar 11 09:05:15 crc kubenswrapper[4808]: I0311 09:05:15.880332 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pcjsh" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.023886 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.027145 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.027193 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 
09:05:16.027228 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.030000 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.030063 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" gracePeriod=600 Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.045001 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:05:16 crc kubenswrapper[4808]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: if [ -n "" ]; then Mar 11 09:05:16 crc kubenswrapper[4808]: GRANT_DATABASE="" Mar 11 09:05:16 crc 
kubenswrapper[4808]: else Mar 11 09:05:16 crc kubenswrapper[4808]: GRANT_DATABASE="*" Mar 11 09:05:16 crc kubenswrapper[4808]: fi Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: # going for maximum compatibility here: Mar 11 09:05:16 crc kubenswrapper[4808]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:05:16 crc kubenswrapper[4808]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:05:16 crc kubenswrapper[4808]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:05:16 crc kubenswrapper[4808]: # support updates Mar 11 09:05:16 crc kubenswrapper[4808]: Mar 11 09:05:16 crc kubenswrapper[4808]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.045311 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="galera" containerID="cri-o://eb0fb77e5ed841a8b6370ae95eb99a470aa96b2d34bd6fc4310915ae83795884" gracePeriod=30 Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.046275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-5hjcc" podUID="da221ea6-e100-48b2-aacc-4b34bcd71782" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.155115 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:05:16 crc 
kubenswrapper[4808]: I0311 09:05:16.275086 4808 scope.go:117] "RemoveContainer" containerID="9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.278123 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.299707 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.326564 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.331784 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dde6-account-create-update-5lc8z"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcsns\" (UniqueName: \"kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333444 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: 
I0311 09:05:16.333460 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333486 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333535 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2w44\" (UniqueName: \"kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333599 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom\") pod 
\"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333645 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333667 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333683 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333704 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333729 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs\") pod \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\" (UID: \"45e8823d-6df6-41fb-b7cd-9cb19e680db1\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333744 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333788 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"45a36b4a-f974-46f6-a719-9765499308ed\" (UID: \"45a36b4a-f974-46f6-a719-9765499308ed\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333960 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.333980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrsb\" (UniqueName: \"kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.343901 4808 projected.go:194] Error preparing data for projected volume kube-api-access-4lrsb for pod openstack/keystone-35b5-account-create-update-vhqzx: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:05:16 crc 
kubenswrapper[4808]: E0311 09:05:16.343982 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:17.343960364 +0000 UTC m=+1568.297283684 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4lrsb" (UniqueName: "kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.344539 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.344580 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-665556c5fd-bnc2f"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.345124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.346677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs" (OuterVolumeSpecName: "logs") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.353004 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.353630 4808 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.354128 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:17.354105979 +0000 UTC m=+1568.307429309 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : configmap "openstack-scripts" not found Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.347889 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs" (OuterVolumeSpecName: "logs") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.360280 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns" (OuterVolumeSpecName: "kube-api-access-wcsns") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "kube-api-access-wcsns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.362559 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.364157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.376063 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.377085 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts" (OuterVolumeSpecName: "scripts") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.400481 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts" (OuterVolumeSpecName: "scripts") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.430981 4808 scope.go:117] "RemoveContainer" containerID="797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.431915 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44" (OuterVolumeSpecName: "kube-api-access-f2w44") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "kube-api-access-f2w44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.434921 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.434961 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbxq\" (UniqueName: \"kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.434986 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435085 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435125 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435178 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435216 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435256 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs\") pod \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\" (UID: \"b3cf17f3-18e6-43f9-ab09-5882a99ffa51\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435665 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435680 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435701 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435711 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcsns\" (UniqueName: \"kubernetes.io/projected/45a36b4a-f974-46f6-a719-9765499308ed-kube-api-access-wcsns\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435725 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435735 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8823d-6df6-41fb-b7cd-9cb19e680db1-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435746 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2w44\" (UniqueName: \"kubernetes.io/projected/45e8823d-6df6-41fb-b7cd-9cb19e680db1-kube-api-access-f2w44\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435758 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45e8823d-6df6-41fb-b7cd-9cb19e680db1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435769 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.435781 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45a36b4a-f974-46f6-a719-9765499308ed-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.436123 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data" (OuterVolumeSpecName: "config-data") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.440012 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.442013 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs" (OuterVolumeSpecName: "logs") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.453145 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb\": container with ID starting with 797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb not found: ID does not exist" containerID="797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.453195 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb"} err="failed to get container status \"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb\": rpc error: code = NotFound desc = could not find container \"797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb\": container with ID starting with 797976e443dd5e136b0f3528631bbb358b7bde1c228efadea193d6fc83f5f1cb not found: ID does not exist" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 
09:05:16.453222 4808 scope.go:117] "RemoveContainer" containerID="9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.458794 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1\": container with ID starting with 9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1 not found: ID does not exist" containerID="9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.459099 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1"} err="failed to get container status \"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1\": rpc error: code = NotFound desc = could not find container \"9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1\": container with ID starting with 9bd7b73d921af51c66c9406fce69c56ff4d3946cb37941b50732c8f9d3483ca1 not found: ID does not exist" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.469561 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq" (OuterVolumeSpecName: "kube-api-access-qnbxq") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "kube-api-access-qnbxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.475197 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts" (OuterVolumeSpecName: "scripts") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.486678 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.487473 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.521787 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.524253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.527304 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.538750 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pcjsh"] Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.547515 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config\") pod \"7fde4956-a749-475e-9b5e-978fd33a4239\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.547665 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs\") pod \"7fde4956-a749-475e-9b5e-978fd33a4239\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.547769 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle\") pod \"7fde4956-a749-475e-9b5e-978fd33a4239\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.547820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwmb5\" (UniqueName: \"kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5\") pod \"7fde4956-a749-475e-9b5e-978fd33a4239\" (UID: \"7fde4956-a749-475e-9b5e-978fd33a4239\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.548409 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.548498 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.548511 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.548520 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.548624 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.549017 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.549039 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.549050 4808 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbxq\" (UniqueName: \"kubernetes.io/projected/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-kube-api-access-qnbxq\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.549064 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.554540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5" (OuterVolumeSpecName: "kube-api-access-zwmb5") pod "7fde4956-a749-475e-9b5e-978fd33a4239" (UID: "7fde4956-a749-475e-9b5e-978fd33a4239"). InnerVolumeSpecName "kube-api-access-zwmb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.554695 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.583772 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.591309 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.592501 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.596482 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.603265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.604415 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data" (OuterVolumeSpecName: "config-data") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.604714 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "7fde4956-a749-475e-9b5e-978fd33a4239" (UID: "7fde4956-a749-475e-9b5e-978fd33a4239"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.604861 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fde4956-a749-475e-9b5e-978fd33a4239" (UID: "7fde4956-a749-475e-9b5e-978fd33a4239"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.612963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.648628 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45a36b4a-f974-46f6-a719-9765499308ed" (UID: "45a36b4a-f974-46f6-a719-9765499308ed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650134 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs\") pod \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650219 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650273 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc 
kubenswrapper[4808]: I0311 09:05:16.650384 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25dnq\" (UniqueName: \"kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq\") pod \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle\") pod \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650456 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650482 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650509 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs\") pod \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650547 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650597 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650620 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxl5v\" (UniqueName: \"kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs" (OuterVolumeSpecName: "logs") pod "25dc3abb-1552-49e8-a8b4-c51edd37f47c" (UID: "25dc3abb-1552-49e8-a8b4-c51edd37f47c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pgj4\" (UniqueName: \"kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650732 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs\") pod \"192d6d53-4174-487e-b652-0ad887475d54\" (UID: \"192d6d53-4174-487e-b652-0ad887475d54\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650755 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650775 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data\") pod \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\" (UID: \"25dc3abb-1552-49e8-a8b4-c51edd37f47c\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650793 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data\") pod \"909d233e-60cb-4a66-989b-2dc8706ea143\" (UID: \"909d233e-60cb-4a66-989b-2dc8706ea143\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652901 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652931 4808 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652941 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652952 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652963 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a36b4a-f974-46f6-a719-9765499308ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652976 4808 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652983 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25dc3abb-1552-49e8-a8b4-c51edd37f47c-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.652992 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.653001 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.653012 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwmb5\" (UniqueName: \"kubernetes.io/projected/7fde4956-a749-475e-9b5e-978fd33a4239-kube-api-access-zwmb5\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.653022 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.650790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs" (OuterVolumeSpecName: "logs") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.651108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs" (OuterVolumeSpecName: "logs") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.660181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45e8823d-6df6-41fb-b7cd-9cb19e680db1" (UID: "45e8823d-6df6-41fb-b7cd-9cb19e680db1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.660269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq" (OuterVolumeSpecName: "kube-api-access-25dnq") pod "25dc3abb-1552-49e8-a8b4-c51edd37f47c" (UID: "25dc3abb-1552-49e8-a8b4-c51edd37f47c"). InnerVolumeSpecName "kube-api-access-25dnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.660288 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.661996 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4" (OuterVolumeSpecName: "kube-api-access-8pgj4") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "kube-api-access-8pgj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.663468 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v" (OuterVolumeSpecName: "kube-api-access-cxl5v") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "kube-api-access-cxl5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.680502 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data" (OuterVolumeSpecName: "config-data") pod "b3cf17f3-18e6-43f9-ab09-5882a99ffa51" (UID: "b3cf17f3-18e6-43f9-ab09-5882a99ffa51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.699949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25dc3abb-1552-49e8-a8b4-c51edd37f47c" (UID: "25dc3abb-1552-49e8-a8b4-c51edd37f47c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.708292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data" (OuterVolumeSpecName: "config-data") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.709020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "7fde4956-a749-475e-9b5e-978fd33a4239" (UID: "7fde4956-a749-475e-9b5e-978fd33a4239"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.719460 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.743707 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.753813 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754097 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45e8823d-6df6-41fb-b7cd-9cb19e680db1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754122 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25dnq\" (UniqueName: \"kubernetes.io/projected/25dc3abb-1552-49e8-a8b4-c51edd37f47c-kube-api-access-25dnq\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754135 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754144 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192d6d53-4174-487e-b652-0ad887475d54-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754153 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754161 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxl5v\" (UniqueName: \"kubernetes.io/projected/192d6d53-4174-487e-b652-0ad887475d54-kube-api-access-cxl5v\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754169 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pgj4\" (UniqueName: \"kubernetes.io/projected/909d233e-60cb-4a66-989b-2dc8706ea143-kube-api-access-8pgj4\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc 
kubenswrapper[4808]: I0311 09:05:16.754177 4808 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fde4956-a749-475e-9b5e-978fd33a4239-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754186 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754195 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754204 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf17f3-18e6-43f9-ab09-5882a99ffa51-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754212 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/909d233e-60cb-4a66-989b-2dc8706ea143-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.754222 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.758796 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data" (OuterVolumeSpecName: "config-data") pod "25dc3abb-1552-49e8-a8b4-c51edd37f47c" (UID: "25dc3abb-1552-49e8-a8b4-c51edd37f47c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.759693 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.769668 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.774158 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "25dc3abb-1552-49e8-a8b4-c51edd37f47c" (UID: "25dc3abb-1552-49e8-a8b4-c51edd37f47c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.791523 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "192d6d53-4174-487e-b652-0ad887475d54" (UID: "192d6d53-4174-487e-b652-0ad887475d54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.791747 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data" (OuterVolumeSpecName: "config-data") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.793667 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "909d233e-60cb-4a66-989b-2dc8706ea143" (UID: "909d233e-60cb-4a66-989b-2dc8706ea143"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.854848 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.854925 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.854972 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmrvq\" (UniqueName: \"kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 
09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855019 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855115 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855251 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855314 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle\") pod \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\" (UID: \"c526a61c-3322-446a-8ff5-edd5a02f4b1f\") " Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855634 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd" (OuterVolumeSpecName: 
"log-httpd") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.855956 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856008 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856025 4808 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856174 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856187 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/192d6d53-4174-487e-b652-0ad887475d54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856196 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc3abb-1552-49e8-a8b4-c51edd37f47c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc 
kubenswrapper[4808]: I0311 09:05:16.856204 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856212 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/909d233e-60cb-4a66-989b-2dc8706ea143-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.856220 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.863522 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq" (OuterVolumeSpecName: "kube-api-access-zmrvq") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "kube-api-access-zmrvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.879859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts" (OuterVolumeSpecName: "scripts") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.905712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.939342 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.939610 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.939700 4808 scope.go:117] "RemoveContainer" containerID="84afb20a36811210fd2305d9fb0d3f8a8331946a4c99ca791ee6a486c55a2dfe" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.939797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.940876 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:05:16 crc kubenswrapper[4808]: E0311 09:05:16.941578 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.942349 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: "c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.946225 4808 generic.go:334] "Generic (PLEG): container finished" podID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerID="48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.946293 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerDied","Data":"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.946320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25dc3abb-1552-49e8-a8b4-c51edd37f47c","Type":"ContainerDied","Data":"71abd76ae9d31e0a942b2074d7291ab833de2fdf4e16011866140ae73a79b9b4"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.946335 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.949918 4808 generic.go:334] "Generic (PLEG): container finished" podID="192d6d53-4174-487e-b652-0ad887475d54" containerID="dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.949984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerDied","Data":"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.950012 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"192d6d53-4174-487e-b652-0ad887475d54","Type":"ContainerDied","Data":"a14632b0fceda7f05ace4d9f216e351bc0c5327cf9ff3070be85c7588a48e269"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.950092 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957402 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmrvq\" (UniqueName: \"kubernetes.io/projected/c526a61c-3322-446a-8ff5-edd5a02f4b1f-kube-api-access-zmrvq\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957708 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957718 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957727 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957737 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c526a61c-3322-446a-8ff5-edd5a02f4b1f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.957746 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.965132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data" (OuterVolumeSpecName: "config-data") pod "c526a61c-3322-446a-8ff5-edd5a02f4b1f" (UID: 
"c526a61c-3322-446a-8ff5-edd5a02f4b1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.966585 4808 generic.go:334] "Generic (PLEG): container finished" podID="909d233e-60cb-4a66-989b-2dc8706ea143" containerID="199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.966653 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerDied","Data":"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.966680 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-688446ffb8-4n8n7" event={"ID":"909d233e-60cb-4a66-989b-2dc8706ea143","Type":"ContainerDied","Data":"93e2de8bb571ae2993d01391b8092a8ef73ba6e25e9b7acafa556b504387547b"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.966723 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-688446ffb8-4n8n7" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.968873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3cf17f3-18e6-43f9-ab09-5882a99ffa51","Type":"ContainerDied","Data":"635a62079ecd8d7a8be3b970aa7555337e5bc87059f40e3f248650e4fcbdcc13"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.968951 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.974553 4808 generic.go:334] "Generic (PLEG): container finished" podID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.974669 4808 generic.go:334] "Generic (PLEG): container finished" podID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.974790 4808 generic.go:334] "Generic (PLEG): container finished" podID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.974786 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.974832 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerDied","Data":"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.975416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerDied","Data":"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.975437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerDied","Data":"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.975448 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c526a61c-3322-446a-8ff5-edd5a02f4b1f","Type":"ContainerDied","Data":"09baa7e4d639a22c735a0165bcf92ae801d57d7fafd155d8b4b7f30a21984cd7"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.980566 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.980594 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"45e8823d-6df6-41fb-b7cd-9cb19e680db1","Type":"ContainerDied","Data":"b46362ca50d32a0a27a4ad3bf062ce747bfaf9bb1b99fc194374f9389ed1c025"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.989991 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerID="82f8eb58e9f3ad3c8cc07e575893f64a2f9e5741e48c57fc799afc4a33b2dc6a" exitCode=0 Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.990050 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c6baf079-20ab-45df-8e2d-2459a4286c9a","Type":"ContainerDied","Data":"82f8eb58e9f3ad3c8cc07e575893f64a2f9e5741e48c57fc799afc4a33b2dc6a"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.990074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c6baf079-20ab-45df-8e2d-2459a4286c9a","Type":"ContainerDied","Data":"11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.990084 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11d7c6ee6acbf7976a8efaf8a5635e0fb24158b78cb757031248629053d48f39" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.991336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"45a36b4a-f974-46f6-a719-9765499308ed","Type":"ContainerDied","Data":"ed23abd17b0f6436dcbda8e69d671ae6eccdff2892f515ea882b5ba02bae2340"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.991442 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.995620 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7fde4956-a749-475e-9b5e-978fd33a4239","Type":"ContainerDied","Data":"470f25486156346186d1d954378f4ada476774612b051acac91371c7d4123c57"} Mar 11 09:05:16 crc kubenswrapper[4808]: I0311 09:05:16.995701 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.002077 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.002184 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5hjcc" event={"ID":"da221ea6-e100-48b2-aacc-4b34bcd71782","Type":"ContainerStarted","Data":"8e15c6d9764b493d6bdbdebfea6bf8fcc03ee4ee2baf41bd110ee9e2d42550fc"} Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.060228 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c526a61c-3322-446a-8ff5-edd5a02f4b1f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.070131 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.090641 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.099487 4808 scope.go:117] "RemoveContainer" containerID="48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.141405 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.150270 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.150696 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.154133 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.154188 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.154138 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.163311 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle\") pod \"c6baf079-20ab-45df-8e2d-2459a4286c9a\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.163351 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data\") pod \"c6baf079-20ab-45df-8e2d-2459a4286c9a\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.163427 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs\") pod \"c6baf079-20ab-45df-8e2d-2459a4286c9a\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.163484 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: 
\"kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs\") pod \"c6baf079-20ab-45df-8e2d-2459a4286c9a\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.163509 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config\") pod \"c6baf079-20ab-45df-8e2d-2459a4286c9a\" (UID: \"c6baf079-20ab-45df-8e2d-2459a4286c9a\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.164953 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data" (OuterVolumeSpecName: "config-data") pod "c6baf079-20ab-45df-8e2d-2459a4286c9a" (UID: "c6baf079-20ab-45df-8e2d-2459a4286c9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.166342 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.170296 4808 scope.go:117] "RemoveContainer" containerID="a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.170692 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs" (OuterVolumeSpecName: "kube-api-access-wcfbs") pod "c6baf079-20ab-45df-8e2d-2459a4286c9a" (UID: "c6baf079-20ab-45df-8e2d-2459a4286c9a"). InnerVolumeSpecName "kube-api-access-wcfbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.171032 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.176451 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.176503 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.179271 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.206685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c6baf079-20ab-45df-8e2d-2459a4286c9a" (UID: "c6baf079-20ab-45df-8e2d-2459a4286c9a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.207507 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6baf079-20ab-45df-8e2d-2459a4286c9a" (UID: "c6baf079-20ab-45df-8e2d-2459a4286c9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.223059 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-688446ffb8-4n8n7"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.249056 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c6baf079-20ab-45df-8e2d-2459a4286c9a" (UID: "c6baf079-20ab-45df-8e2d-2459a4286c9a"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.251203 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.262623 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.265213 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.265244 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.265255 4808 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6baf079-20ab-45df-8e2d-2459a4286c9a-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.265265 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcfbs\" (UniqueName: 
\"kubernetes.io/projected/c6baf079-20ab-45df-8e2d-2459a4286c9a-kube-api-access-wcfbs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.265275 4808 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6baf079-20ab-45df-8e2d-2459a4286c9a-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.275750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.301062 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.303769 4808 scope.go:117] "RemoveContainer" containerID="48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.304286 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314\": container with ID starting with 48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314 not found: ID does not exist" containerID="48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.304321 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314"} err="failed to get container status \"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314\": rpc error: code = NotFound desc = could not find container \"48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314\": container with ID starting with 48b494dd6ddaf1d2614480b4dbea2fb21a3de491e1743fed834630233f736314 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.304349 4808 scope.go:117] 
"RemoveContainer" containerID="a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.304633 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689\": container with ID starting with a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689 not found: ID does not exist" containerID="a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.304662 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689"} err="failed to get container status \"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689\": rpc error: code = NotFound desc = could not find container \"a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689\": container with ID starting with a086c0e3022a2db2d34ca60d4015160e76a8270327ee343fcdb2cb62321f0689 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.304685 4808 scope.go:117] "RemoveContainer" containerID="dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.307862 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.316462 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.322599 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.327845 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:05:17 crc 
kubenswrapper[4808]: I0311 09:05:17.332919 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.338453 4808 scope.go:117] "RemoveContainer" containerID="c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.338979 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.355054 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.358342 4808 scope.go:117] "RemoveContainer" containerID="dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.358911 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7\": container with ID starting with dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7 not found: ID does not exist" containerID="dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.359032 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7"} err="failed to get container status \"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7\": rpc error: code = NotFound desc = could not find container \"dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7\": container with ID starting with dc94ba0454419f0c1846eea9c0458329c712e6687fa7a17e1ad016ba48cf2ae7 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.359156 4808 scope.go:117] "RemoveContainer" 
containerID="c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.359578 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e\": container with ID starting with c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e not found: ID does not exist" containerID="c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.359613 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e"} err="failed to get container status \"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e\": rpc error: code = NotFound desc = could not find container \"c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e\": container with ID starting with c7fcbacbe8923acd9369c79ee4cc7aa70294bd2899daf98c5c24346801197e4e not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.359638 4808 scope.go:117] "RemoveContainer" containerID="199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.362540 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.366447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.366598 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4lrsb\" (UniqueName: \"kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb\") pod \"keystone-35b5-account-create-update-vhqzx\" (UID: \"02e2eb2e-01e6-4f49-9880-3aed7eb20bf6\") " pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.367157 4808 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.367291 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:19.367274606 +0000 UTC m=+1570.320597926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : configmap "openstack-scripts" not found Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.368955 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.370404 4808 projected.go:194] Error preparing data for projected volume kube-api-access-4lrsb for pod openstack/keystone-35b5-account-create-update-vhqzx: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.370505 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb podName:02e2eb2e-01e6-4f49-9880-3aed7eb20bf6 nodeName:}" failed. 
No retries permitted until 2026-03-11 09:05:19.370478609 +0000 UTC m=+1570.323801989 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4lrsb" (UniqueName: "kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb") pod "keystone-35b5-account-create-update-vhqzx" (UID: "02e2eb2e-01e6-4f49-9880-3aed7eb20bf6") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.399145 4808 scope.go:117] "RemoveContainer" containerID="e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.452695 4808 scope.go:117] "RemoveContainer" containerID="199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.453178 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b\": container with ID starting with 199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b not found: ID does not exist" containerID="199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.453208 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b"} err="failed to get container status \"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b\": rpc error: code = NotFound desc = could not find container \"199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b\": container with ID starting with 199e1ac6e17d3bec4008ea266cd9404ffd73ec3cdee3f462464896d016c4f52b not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.453228 4808 scope.go:117] "RemoveContainer" 
containerID="e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.453515 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332\": container with ID starting with e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332 not found: ID does not exist" containerID="e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.453540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332"} err="failed to get container status \"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332\": rpc error: code = NotFound desc = could not find container \"e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332\": container with ID starting with e9ba485b53ad1d07b1fb284bc1453eaeb1f6baef2ad3437f4b34571187340332 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.453567 4808 scope.go:117] "RemoveContainer" containerID="69ba1fe4eed62164ec20e29e3973c8d7238e07cc99a3de39bc2e21c2312afc26" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.468429 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts\") pod \"da221ea6-e100-48b2-aacc-4b34bcd71782\" (UID: \"da221ea6-e100-48b2-aacc-4b34bcd71782\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.468649 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzlhh\" (UniqueName: \"kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh\") pod \"da221ea6-e100-48b2-aacc-4b34bcd71782\" (UID: 
\"da221ea6-e100-48b2-aacc-4b34bcd71782\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.469103 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da221ea6-e100-48b2-aacc-4b34bcd71782" (UID: "da221ea6-e100-48b2-aacc-4b34bcd71782"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.469514 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da221ea6-e100-48b2-aacc-4b34bcd71782-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.472941 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh" (OuterVolumeSpecName: "kube-api-access-zzlhh") pod "da221ea6-e100-48b2-aacc-4b34bcd71782" (UID: "da221ea6-e100-48b2-aacc-4b34bcd71782"). InnerVolumeSpecName "kube-api-access-zzlhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.500309 4808 scope.go:117] "RemoveContainer" containerID="d3ed6d04157e55ecb439f27f04de8f8ce7e201a4c77868d475de1fccec4b926a" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.535297 4808 scope.go:117] "RemoveContainer" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.569761 4808 scope.go:117] "RemoveContainer" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.570619 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzlhh\" (UniqueName: \"kubernetes.io/projected/da221ea6-e100-48b2-aacc-4b34bcd71782-kube-api-access-zzlhh\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.570770 4808 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.570795 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.570824 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.570837 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.570907 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. 
No retries permitted until 2026-03-11 09:05:25.570872524 +0000 UTC m=+1576.524195844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.598086 4808 scope.go:117] "RemoveContainer" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.620598 4808 scope.go:117] "RemoveContainer" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.639909 4808 scope.go:117] "RemoveContainer" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.640394 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": container with ID starting with 476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390 not found: ID does not exist" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.640433 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390"} err="failed to get container status \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": rpc error: code = NotFound desc = could not find container \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": container with ID starting with 
476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.640459 4808 scope.go:117] "RemoveContainer" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.640918 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": container with ID starting with d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27 not found: ID does not exist" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.640967 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27"} err="failed to get container status \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": rpc error: code = NotFound desc = could not find container \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": container with ID starting with d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.640999 4808 scope.go:117] "RemoveContainer" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.641425 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": container with ID starting with cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda not found: ID does not exist" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" Mar 11 09:05:17 crc 
kubenswrapper[4808]: I0311 09:05:17.641466 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda"} err="failed to get container status \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": rpc error: code = NotFound desc = could not find container \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": container with ID starting with cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.641482 4808 scope.go:117] "RemoveContainer" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.641764 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": container with ID starting with 4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465 not found: ID does not exist" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.641797 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465"} err="failed to get container status \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": rpc error: code = NotFound desc = could not find container \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": container with ID starting with 4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.641811 4808 scope.go:117] "RemoveContainer" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" Mar 11 
09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.642099 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390"} err="failed to get container status \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": rpc error: code = NotFound desc = could not find container \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": container with ID starting with 476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.642156 4808 scope.go:117] "RemoveContainer" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.642608 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27"} err="failed to get container status \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": rpc error: code = NotFound desc = could not find container \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": container with ID starting with d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.642638 4808 scope.go:117] "RemoveContainer" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.642985 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda"} err="failed to get container status \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": rpc error: code = NotFound desc = could not find container 
\"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": container with ID starting with cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.643038 4808 scope.go:117] "RemoveContainer" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.643291 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465"} err="failed to get container status \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": rpc error: code = NotFound desc = could not find container \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": container with ID starting with 4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.643313 4808 scope.go:117] "RemoveContainer" containerID="476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.643807 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390"} err="failed to get container status \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": rpc error: code = NotFound desc = could not find container \"476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390\": container with ID starting with 476be30bd69c12932b5a3bc740f763e7b974ace089360c9808141c4922a76390 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.643839 4808 scope.go:117] "RemoveContainer" containerID="d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644091 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27"} err="failed to get container status \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": rpc error: code = NotFound desc = could not find container \"d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27\": container with ID starting with d192897093f81eb77d49df57c52794a2e80dd6a9b36aabde6381dca653540f27 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644116 4808 scope.go:117] "RemoveContainer" containerID="cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644336 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda"} err="failed to get container status \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": rpc error: code = NotFound desc = could not find container \"cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda\": container with ID starting with cac6c475704a03c7e5006a4b2741b4004efa4bf58536e1048913a3630bde7cda not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644398 4808 scope.go:117] "RemoveContainer" containerID="4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644679 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465"} err="failed to get container status \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": rpc error: code = NotFound desc = could not find container \"4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465\": container with ID starting with 
4dffba39f9aa8f7f8d804f85bd4b10f46074c0a37fa13b669739dacf69102465 not found: ID does not exist" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.644706 4808 scope.go:117] "RemoveContainer" containerID="600ad91b4b18f17d71cd6096452f04faaca352ace36331fc5ba1a67011114ccf" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.656105 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_08b88a34-0eac-4a47-b3b3-89a8024bbe7b/ovn-northd/0.log" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.656171 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.693324 4808 scope.go:117] "RemoveContainer" containerID="449aa92d48851baa151002599c1327e205ca3ac321e49b56f196db3dc8961bcc" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.716773 4808 scope.go:117] "RemoveContainer" containerID="cf55bdcbeb3d626ba9dfd3112f8a4875f325fed6c1ddb8829be7326c5b814762" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.742676 4808 scope.go:117] "RemoveContainer" containerID="b19b84b99164bc73197e675a39d0a76695e78a944f51b46351fa56c764200830" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777467 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777561 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z458\" (UniqueName: \"kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777752 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777804 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777823 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777843 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.777890 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts\") pod \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\" (UID: \"08b88a34-0eac-4a47-b3b3-89a8024bbe7b\") " Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.778991 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: 
"08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.779047 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config" (OuterVolumeSpecName: "config") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.781215 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts" (OuterVolumeSpecName: "scripts") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.795159 4808 scope.go:117] "RemoveContainer" containerID="93a0e55009e5032ca6043f65c6537e79b802bcb416d44c8d232d960a3cf12786" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.796297 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458" (OuterVolumeSpecName: "kube-api-access-6z458") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "kube-api-access-6z458". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.811020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.818935 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16eab58c-16f2-4054-aae1-d4de176db24c" path="/var/lib/kubelet/pods/16eab58c-16f2-4054-aae1-d4de176db24c/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.819506 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192d6d53-4174-487e-b652-0ad887475d54" path="/var/lib/kubelet/pods/192d6d53-4174-487e-b652-0ad887475d54/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.820115 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" path="/var/lib/kubelet/pods/25dc3abb-1552-49e8-a8b4-c51edd37f47c/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.821167 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ec7dec-ae7d-49ff-95cd-af13c2ab08a5" path="/var/lib/kubelet/pods/40ec7dec-ae7d-49ff-95cd-af13c2ab08a5/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.821615 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a36b4a-f974-46f6-a719-9765499308ed" path="/var/lib/kubelet/pods/45a36b4a-f974-46f6-a719-9765499308ed/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.822418 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" path="/var/lib/kubelet/pods/45e8823d-6df6-41fb-b7cd-9cb19e680db1/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.823617 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fde4956-a749-475e-9b5e-978fd33a4239" path="/var/lib/kubelet/pods/7fde4956-a749-475e-9b5e-978fd33a4239/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.824499 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="909d233e-60cb-4a66-989b-2dc8706ea143" path="/var/lib/kubelet/pods/909d233e-60cb-4a66-989b-2dc8706ea143/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.825173 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b4b44f-8d7c-4933-b20d-a1d79d7c90b4" path="/var/lib/kubelet/pods/91b4b44f-8d7c-4933-b20d-a1d79d7c90b4/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.831028 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" path="/var/lib/kubelet/pods/b3cf17f3-18e6-43f9-ab09-5882a99ffa51/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.831791 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4922ac8-998e-4ba3-88cb-6805fa10c7fd" path="/var/lib/kubelet/pods/b4922ac8-998e-4ba3-88cb-6805fa10c7fd/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.832268 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" path="/var/lib/kubelet/pods/c526a61c-3322-446a-8ff5-edd5a02f4b1f/volumes" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.836617 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.838347 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.845915 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.851496 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:05:17 crc kubenswrapper[4808]: E0311 09:05:17.852382 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.861672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "08b88a34-0eac-4a47-b3b3-89a8024bbe7b" (UID: "08b88a34-0eac-4a47-b3b3-89a8024bbe7b"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879207 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879236 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z458\" (UniqueName: \"kubernetes.io/projected/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-kube-api-access-6z458\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879244 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879253 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879261 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879270 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:17 crc kubenswrapper[4808]: I0311 09:05:17.879279 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08b88a34-0eac-4a47-b3b3-89a8024bbe7b-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.028510 
4808 generic.go:334] "Generic (PLEG): container finished" podID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerID="eb0fb77e5ed841a8b6370ae95eb99a470aa96b2d34bd6fc4310915ae83795884" exitCode=0 Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.028576 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerDied","Data":"eb0fb77e5ed841a8b6370ae95eb99a470aa96b2d34bd6fc4310915ae83795884"} Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.030888 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5hjcc" event={"ID":"da221ea6-e100-48b2-aacc-4b34bcd71782","Type":"ContainerDied","Data":"8e15c6d9764b493d6bdbdebfea6bf8fcc03ee4ee2baf41bd110ee9e2d42550fc"} Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.030900 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5hjcc" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034129 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_08b88a34-0eac-4a47-b3b3-89a8024bbe7b/ovn-northd/0.log" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034175 4808 generic.go:334] "Generic (PLEG): container finished" podID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerID="5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840" exitCode=139 Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034262 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerDied","Data":"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840"} Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034293 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"08b88a34-0eac-4a47-b3b3-89a8024bbe7b","Type":"ContainerDied","Data":"a80a84c7a02538db7dd104a1c25f58454d8b8ad58240ca76147e40d3184d3d26"} Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034295 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.034310 4808 scope.go:117] "RemoveContainer" containerID="6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.060492 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-35b5-account-create-update-vhqzx" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.068858 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.080640 4808 scope.go:117] "RemoveContainer" containerID="5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840" Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.086146 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.087601 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data podName:a1e42e33-7453-4b97-abca-0c45cc27faa2 nodeName:}" failed. No retries permitted until 2026-03-11 09:05:26.087575913 +0000 UTC m=+1577.040899233 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data") pod "rabbitmq-server-0" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2") : configmap "rabbitmq-config-data" not found Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.095488 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.110892 4808 scope.go:117] "RemoveContainer" containerID="6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644" Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.116568 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644\": container with ID starting with 6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644 not found: ID does not exist" containerID="6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.116634 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644"} err="failed to get container status \"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644\": rpc error: code = NotFound desc = could not find container \"6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644\": container with ID starting with 6ce012fcceee4fbe97b03ef149a01fa24e752a2989ef48bafa67da5e91a5d644 not found: ID does not exist" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.116662 4808 scope.go:117] "RemoveContainer" containerID="5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.117205 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-5hjcc"] Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.117334 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840\": container with ID starting with 5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840 not found: ID does not exist" containerID="5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.117460 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840"} err="failed to get container status \"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840\": rpc error: code = NotFound desc = could not find container \"5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840\": container with ID starting with 5b2448d64b61c9eccd7290e46f4f9a7b28f6868fca3fd9535f5871db771e8840 not found: ID does not exist" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.132257 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.137440 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.146313 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-35b5-account-create-update-vhqzx"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.152231 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-35b5-account-create-update-vhqzx"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.157730 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.162447 4808 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.189444 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lrsb\" (UniqueName: \"kubernetes.io/projected/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-kube-api-access-4lrsb\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.189475 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.382712 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494087 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2562v\" (UniqueName: \"kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494147 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494205 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494252 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494351 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494386 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.494408 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"49e0938f-9c77-4bf3-b649-1be492ef1647\" (UID: \"49e0938f-9c77-4bf3-b649-1be492ef1647\") " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.498552 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: 
"49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.498724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.498726 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.499196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.500622 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v" (OuterVolumeSpecName: "kube-api-access-2562v") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "kube-api-access-2562v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.506640 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.526074 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.542068 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "49e0938f-9c77-4bf3-b649-1be492ef1647" (UID: "49e0938f-9c77-4bf3-b649-1be492ef1647"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596311 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596346 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596366 4808 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0938f-9c77-4bf3-b649-1be492ef1647-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596375 4808 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596397 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596407 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2562v\" (UniqueName: \"kubernetes.io/projected/49e0938f-9c77-4bf3-b649-1be492ef1647-kube-api-access-2562v\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.596415 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: 
I0311 09:05:18.596423 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49e0938f-9c77-4bf3-b649-1be492ef1647-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.611270 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 11 09:05:18 crc kubenswrapper[4808]: I0311 09:05:18.697649 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.857205 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.858611 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.859523 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:18 crc kubenswrapper[4808]: E0311 09:05:18.859567 4808 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.100628 4808 generic.go:334] "Generic (PLEG): container finished" podID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerID="05f865332615ad9f698e0cf3c33551f4d94238b92da639d722149c1d2ab22b35" exitCode=0 Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.100703 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerDied","Data":"05f865332615ad9f698e0cf3c33551f4d94238b92da639d722149c1d2ab22b35"} Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.104034 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.104466 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"49e0938f-9c77-4bf3-b649-1be492ef1647","Type":"ContainerDied","Data":"4dc13a953beb5f28c0a418de47c1cbf82f0c69dafcb7418cc269198faae562c5"} Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.104507 4808 scope.go:117] "RemoveContainer" containerID="eb0fb77e5ed841a8b6370ae95eb99a470aa96b2d34bd6fc4310915ae83795884" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.109120 4808 generic.go:334] "Generic (PLEG): container finished" podID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" containerID="b47c6964048276c5d98b0417ce7ca53e6a7bf3b09f49607b9460de27e6f58132" exitCode=0 Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.109198 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447d66dcc-mc8df" 
event={"ID":"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38","Type":"ContainerDied","Data":"b47c6964048276c5d98b0417ce7ca53e6a7bf3b09f49607b9460de27e6f58132"} Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.111384 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerID="0f8754c1594d2feb21d234a73d819b71534a5e40f918961ac2feb938e1330d6c" exitCode=0 Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.111416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerDied","Data":"0f8754c1594d2feb21d234a73d819b71534a5e40f918961ac2feb938e1330d6c"} Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.165507 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.171543 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.174305 4808 scope.go:117] "RemoveContainer" containerID="a65df125d18e1f77fc1240d2aa4a854121a8ae7dbd36e648b9ddc3c72315bc77" Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.207465 4808 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.207539 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data podName:549d4ad5-b5b0-45bd-87b0-b9a6ee77866e nodeName:}" failed. No retries permitted until 2026-03-11 09:05:27.207520675 +0000 UTC m=+1578.160843995 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data") pod "rabbitmq-cell1-server-0" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e") : configmap "rabbitmq-cell1-config-data" not found Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.365313 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.409430 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.409480 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjfk\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.409538 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410396 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410792 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410860 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410893 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410957 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.410994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.413948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk" (OuterVolumeSpecName: "kube-api-access-2rjfk") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "kube-api-access-2rjfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.413994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd\") pod \"a1e42e33-7453-4b97-abca-0c45cc27faa2\" (UID: \"a1e42e33-7453-4b97-abca-0c45cc27faa2\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.414462 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rjfk\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-kube-api-access-2rjfk\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.414486 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.415884 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.417838 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.418800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.419137 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.421220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). 
InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.423638 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info" (OuterVolumeSpecName: "pod-info") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.438786 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.440188 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.441838 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:05:19 crc kubenswrapper[4808]: E0311 09:05:19.441874 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-cell1-conductor-0" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.453944 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data" (OuterVolumeSpecName: "config-data") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.458767 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf" (OuterVolumeSpecName: "server-conf") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.470004 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.476537 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516164 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516260 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516336 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516353 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516404 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516424 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516530 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.516560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmqz5\" (UniqueName: \"kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517625 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5444\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517658 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517690 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517723 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517788 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517825 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret\") pod \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517896 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info\") pod 
\"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\" (UID: \"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.517969 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518003 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle\") pod \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\" (UID: \"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38\") " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518813 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518838 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518855 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1e42e33-7453-4b97-abca-0c45cc27faa2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518871 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518904 4808 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518920 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1e42e33-7453-4b97-abca-0c45cc27faa2-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518936 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.518952 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1e42e33-7453-4b97-abca-0c45cc27faa2-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.520071 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.521016 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.522685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.523942 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts" (OuterVolumeSpecName: "scripts") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.524894 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.528090 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a1e42e33-7453-4b97-abca-0c45cc27faa2" (UID: "a1e42e33-7453-4b97-abca-0c45cc27faa2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.537151 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444" (OuterVolumeSpecName: "kube-api-access-k5444") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "kube-api-access-k5444". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.537679 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.537870 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.540615 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.541580 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info" (OuterVolumeSpecName: "pod-info") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.541797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.543517 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5" (OuterVolumeSpecName: "kube-api-access-qmqz5") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "kube-api-access-qmqz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.544234 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.553240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data" (OuterVolumeSpecName: "config-data") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.554715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data" (OuterVolumeSpecName: "config-data") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.562167 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf" (OuterVolumeSpecName: "server-conf") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.565493 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.567233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.591163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" (UID: "b4b8b237-3ec7-41fd-b6a1-fa3670d61c38"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.606530 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" (UID: "549d4ad5-b5b0-45bd-87b0-b9a6ee77866e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620293 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620320 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620333 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620344 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620371 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620384 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620396 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620407 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1e42e33-7453-4b97-abca-0c45cc27faa2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620418 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620427 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620439 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620466 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for 
volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620477 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620488 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620500 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmqz5\" (UniqueName: \"kubernetes.io/projected/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-kube-api-access-qmqz5\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620511 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620522 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5444\" (UniqueName: \"kubernetes.io/projected/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-kube-api-access-k5444\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620533 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620544 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e-rabbitmq-plugins\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620555 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.620565 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.634841 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.721881 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.799319 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e2eb2e-01e6-4f49-9880-3aed7eb20bf6" path="/var/lib/kubelet/pods/02e2eb2e-01e6-4f49-9880-3aed7eb20bf6/volumes" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.799795 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" path="/var/lib/kubelet/pods/08b88a34-0eac-4a47-b3b3-89a8024bbe7b/volumes" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.800583 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" path="/var/lib/kubelet/pods/49e0938f-9c77-4bf3-b649-1be492ef1647/volumes" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.802754 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" 
path="/var/lib/kubelet/pods/c6baf079-20ab-45df-8e2d-2459a4286c9a/volumes" Mar 11 09:05:19 crc kubenswrapper[4808]: I0311 09:05:19.803309 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da221ea6-e100-48b2-aacc-4b34bcd71782" path="/var/lib/kubelet/pods/da221ea6-e100-48b2-aacc-4b34bcd71782/volumes" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.122831 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.123697 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"549d4ad5-b5b0-45bd-87b0-b9a6ee77866e","Type":"ContainerDied","Data":"634c1d9c33bd62beaea1da188b124ac41111282a8b7359d08dd847e583ed5b35"} Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.123740 4808 scope.go:117] "RemoveContainer" containerID="05f865332615ad9f698e0cf3c33551f4d94238b92da639d722149c1d2ab22b35" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.128398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447d66dcc-mc8df" event={"ID":"b4b8b237-3ec7-41fd-b6a1-fa3670d61c38","Type":"ContainerDied","Data":"2cff0fdb0107f2b8631be6cf24556311063eef05685df078f55d9979aa002673"} Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.128499 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6447d66dcc-mc8df" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.135021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a1e42e33-7453-4b97-abca-0c45cc27faa2","Type":"ContainerDied","Data":"9ed8629a77a7b2a4196e9177e08748adb9c8b72b95a4adcf70710437ccb8d0db"} Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.135071 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.151231 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.156518 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6447d66dcc-mc8df"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.160808 4808 scope.go:117] "RemoveContainer" containerID="8b395b42706b1de9013f9b75864a0671c56c131544021b5094dacdd4a57911d9" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.169411 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.190275 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.197073 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.212079 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.216581 4808 scope.go:117] "RemoveContainer" containerID="b47c6964048276c5d98b0417ce7ca53e6a7bf3b09f49607b9460de27e6f58132" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.275455 4808 scope.go:117] "RemoveContainer" containerID="0f8754c1594d2feb21d234a73d819b71534a5e40f918961ac2feb938e1330d6c" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.314148 4808 scope.go:117] "RemoveContainer" containerID="b5cfaa5690cabe34a4d4686bbe5047a703650cf16cae45773a50285423e560b6" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.680552 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.686959 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743053 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pzxs\" (UniqueName: \"kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs\") pod \"512d8427-151d-42dd-a2fe-b52d22583604\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743143 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle\") pod \"512d8427-151d-42dd-a2fe-b52d22583604\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743187 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data\") pod \"512d8427-151d-42dd-a2fe-b52d22583604\" (UID: \"512d8427-151d-42dd-a2fe-b52d22583604\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data\") pod \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743245 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs\") pod \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\" (UID: 
\"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743263 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom\") pod \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743306 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xwj2\" (UniqueName: \"kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2\") pod \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.743331 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle\") pod \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\" (UID: \"50b3975b-d699-4f86-8aba-3a00f99bfdbc\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.749768 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs" (OuterVolumeSpecName: "logs") pod "50b3975b-d699-4f86-8aba-3a00f99bfdbc" (UID: "50b3975b-d699-4f86-8aba-3a00f99bfdbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.756601 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs" (OuterVolumeSpecName: "kube-api-access-8pzxs") pod "512d8427-151d-42dd-a2fe-b52d22583604" (UID: "512d8427-151d-42dd-a2fe-b52d22583604"). InnerVolumeSpecName "kube-api-access-8pzxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.761651 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "50b3975b-d699-4f86-8aba-3a00f99bfdbc" (UID: "50b3975b-d699-4f86-8aba-3a00f99bfdbc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.767558 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2" (OuterVolumeSpecName: "kube-api-access-4xwj2") pod "50b3975b-d699-4f86-8aba-3a00f99bfdbc" (UID: "50b3975b-d699-4f86-8aba-3a00f99bfdbc"). InnerVolumeSpecName "kube-api-access-4xwj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.789861 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data" (OuterVolumeSpecName: "config-data") pod "512d8427-151d-42dd-a2fe-b52d22583604" (UID: "512d8427-151d-42dd-a2fe-b52d22583604"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.805625 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512d8427-151d-42dd-a2fe-b52d22583604" (UID: "512d8427-151d-42dd-a2fe-b52d22583604"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.815573 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50b3975b-d699-4f86-8aba-3a00f99bfdbc" (UID: "50b3975b-d699-4f86-8aba-3a00f99bfdbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.816583 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data" (OuterVolumeSpecName: "config-data") pod "50b3975b-d699-4f86-8aba-3a00f99bfdbc" (UID: "50b3975b-d699-4f86-8aba-3a00f99bfdbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845292 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845323 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845331 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50b3975b-d699-4f86-8aba-3a00f99bfdbc-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845339 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc 
kubenswrapper[4808]: I0311 09:05:20.845349 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xwj2\" (UniqueName: \"kubernetes.io/projected/50b3975b-d699-4f86-8aba-3a00f99bfdbc-kube-api-access-4xwj2\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845373 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b3975b-d699-4f86-8aba-3a00f99bfdbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845382 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pzxs\" (UniqueName: \"kubernetes.io/projected/512d8427-151d-42dd-a2fe-b52d22583604-kube-api-access-8pzxs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.845397 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512d8427-151d-42dd-a2fe-b52d22583604-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.900585 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.917795 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.921336 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.948013 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5hsw\" (UniqueName: \"kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw\") pod \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.948052 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data\") pod \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.948246 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle\") pod \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\" (UID: \"c8798b96-74d7-4e0e-a4c7-97f3c995544b\") " Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.956494 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw" (OuterVolumeSpecName: "kube-api-access-s5hsw") pod "c8798b96-74d7-4e0e-a4c7-97f3c995544b" (UID: "c8798b96-74d7-4e0e-a4c7-97f3c995544b"). InnerVolumeSpecName "kube-api-access-s5hsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.971497 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8798b96-74d7-4e0e-a4c7-97f3c995544b" (UID: "c8798b96-74d7-4e0e-a4c7-97f3c995544b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:20 crc kubenswrapper[4808]: I0311 09:05:20.974526 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data" (OuterVolumeSpecName: "config-data") pod "c8798b96-74d7-4e0e-a4c7-97f3c995544b" (UID: "c8798b96-74d7-4e0e-a4c7-97f3c995544b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049243 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049320 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049388 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049405 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp7ld\" (UniqueName: \"kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049424 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049466 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data\") pod \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049500 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle\") pod \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049521 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4cxs\" (UniqueName: \"kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs\") pod \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\" (UID: \"e28fc76b-781e-4397-87b5-2b2bf6d2a496\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049574 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id\") pod \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\" (UID: \"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0\") " Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049871 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc 
kubenswrapper[4808]: I0311 09:05:21.049884 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5hsw\" (UniqueName: \"kubernetes.io/projected/c8798b96-74d7-4e0e-a4c7-97f3c995544b-kube-api-access-s5hsw\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049893 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8798b96-74d7-4e0e-a4c7-97f3c995544b-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.049947 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.052301 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts" (OuterVolumeSpecName: "scripts") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.052984 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs" (OuterVolumeSpecName: "kube-api-access-v4cxs") pod "e28fc76b-781e-4397-87b5-2b2bf6d2a496" (UID: "e28fc76b-781e-4397-87b5-2b2bf6d2a496"). InnerVolumeSpecName "kube-api-access-v4cxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.053327 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld" (OuterVolumeSpecName: "kube-api-access-vp7ld") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "kube-api-access-vp7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.053761 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.071199 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e28fc76b-781e-4397-87b5-2b2bf6d2a496" (UID: "e28fc76b-781e-4397-87b5-2b2bf6d2a496"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.073997 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data" (OuterVolumeSpecName: "config-data") pod "e28fc76b-781e-4397-87b5-2b2bf6d2a496" (UID: "e28fc76b-781e-4397-87b5-2b2bf6d2a496"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.089878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.131483 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data" (OuterVolumeSpecName: "config-data") pod "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" (UID: "bf0df220-037c-4d17-b4ac-93f6d7eb4fa0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152332 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152401 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp7ld\" (UniqueName: \"kubernetes.io/projected/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-kube-api-access-vp7ld\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152417 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152431 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc 
kubenswrapper[4808]: I0311 09:05:21.152442 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28fc76b-781e-4397-87b5-2b2bf6d2a496-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152482 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4cxs\" (UniqueName: \"kubernetes.io/projected/e28fc76b-781e-4397-87b5-2b2bf6d2a496-kube-api-access-v4cxs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152495 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152506 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.152517 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.165240 4808 generic.go:334] "Generic (PLEG): container finished" podID="512d8427-151d-42dd-a2fe-b52d22583604" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" exitCode=0 Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.165309 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.165323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"512d8427-151d-42dd-a2fe-b52d22583604","Type":"ContainerDied","Data":"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.165398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"512d8427-151d-42dd-a2fe-b52d22583604","Type":"ContainerDied","Data":"c86b94d1a0d436dbe1724eab959d9c375ca1128d25cf62566cde2ea56ac15edd"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.165418 4808 scope.go:117] "RemoveContainer" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.168951 4808 generic.go:334] "Generic (PLEG): container finished" podID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerID="aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c" exitCode=0 Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.169156 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.170069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerDied","Data":"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.170122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf0df220-037c-4d17-b4ac-93f6d7eb4fa0","Type":"ContainerDied","Data":"cd2b7d4db2246b62c0bfd63ef26fafb9db81109e82e362b45e8eac747588705e"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.172193 4808 generic.go:334] "Generic (PLEG): container finished" podID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" exitCode=0 Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.172328 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.173218 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c8798b96-74d7-4e0e-a4c7-97f3c995544b","Type":"ContainerDied","Data":"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.173340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c8798b96-74d7-4e0e-a4c7-97f3c995544b","Type":"ContainerDied","Data":"c3154a9665cef62668e2a53aa7ca84b5b233bad0c5f30cf7eda704b78e2abdbd"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.175693 4808 generic.go:334] "Generic (PLEG): container finished" podID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" exitCode=0 Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.175868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e28fc76b-781e-4397-87b5-2b2bf6d2a496","Type":"ContainerDied","Data":"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.176029 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e28fc76b-781e-4397-87b5-2b2bf6d2a496","Type":"ContainerDied","Data":"53fde59faa4bc42e8ab2e74a1ab3aed153f2e7b3f7887b682e88f3b25e946250"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.176218 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.186350 4808 generic.go:334] "Generic (PLEG): container finished" podID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerID="e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423" exitCode=0 Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.186465 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerDied","Data":"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.186493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b745fd47c-v25f8" event={"ID":"50b3975b-d699-4f86-8aba-3a00f99bfdbc","Type":"ContainerDied","Data":"502e64cd15683baf26ed13219963e8e13c916389fe2a641d97b1c00ff3385749"} Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.186557 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b745fd47c-v25f8" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.193630 4808 scope.go:117] "RemoveContainer" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.196167 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed\": container with ID starting with e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed not found: ID does not exist" containerID="e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.196278 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed"} err="failed to get container status \"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed\": rpc error: code = NotFound desc = could not find container \"e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed\": container with ID starting with e2259d06d5e8abb57d4defe5dfed63d448596138a276f57890cb8984cb01e2ed not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.196385 4808 scope.go:117] "RemoveContainer" containerID="fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.218763 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.226128 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.243668 4808 scope.go:117] "RemoveContainer" containerID="aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c" Mar 11 09:05:21 crc 
kubenswrapper[4808]: I0311 09:05:21.249045 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.259878 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.271083 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.285742 4808 scope.go:117] "RemoveContainer" containerID="fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.286638 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14\": container with ID starting with fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14 not found: ID does not exist" containerID="fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.286678 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14"} err="failed to get container status \"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14\": rpc error: code = NotFound desc = could not find container \"fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14\": container with ID starting with fa00d39da6c2ff243156e8ec3d359bdd8214eb14e877427f3b6d367217f70a14 not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.286731 4808 scope.go:117] "RemoveContainer" containerID="aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.287274 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c\": container with ID starting with aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c not found: ID does not exist" containerID="aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.287342 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c"} err="failed to get container status \"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c\": rpc error: code = NotFound desc = could not find container \"aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c\": container with ID starting with aba62c8514267bc46dbefc96570aade30f30e9975367a34465e330970831b81c not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.287404 4808 scope.go:117] "RemoveContainer" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.288037 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.300964 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.311300 4808 scope.go:117] "RemoveContainer" containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.311860 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3\": container with ID starting with 3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3 not found: ID does not exist" 
containerID="3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.311898 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3"} err="failed to get container status \"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3\": rpc error: code = NotFound desc = could not find container \"3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3\": container with ID starting with 3eb75e498cfe3f13fb0222d0de5cdb5657fd0be88039ded0f03ccc09d63cb1e3 not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.311954 4808 scope.go:117] "RemoveContainer" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.314737 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.328301 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.335080 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6b745fd47c-v25f8"] Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.358759 4808 scope.go:117] "RemoveContainer" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.359237 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897\": container with ID starting with 46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897 not found: ID does not exist" containerID="46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897" Mar 11 
09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.359265 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897"} err="failed to get container status \"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897\": rpc error: code = NotFound desc = could not find container \"46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897\": container with ID starting with 46dcd1f24c24da00b44ee1905152bd43556064f3dd7b7d9f3e140bf0866a8897 not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.359286 4808 scope.go:117] "RemoveContainer" containerID="e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.382616 4808 scope.go:117] "RemoveContainer" containerID="7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.400564 4808 scope.go:117] "RemoveContainer" containerID="e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.400909 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423\": container with ID starting with e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423 not found: ID does not exist" containerID="e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.400936 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423"} err="failed to get container status \"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423\": rpc error: code = NotFound desc = could not find container 
\"e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423\": container with ID starting with e3ce99f2e8ebb84a8c715f80b60a3bd785a8f332525b48b0979b54f0ca248423 not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.400958 4808 scope.go:117] "RemoveContainer" containerID="7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2" Mar 11 09:05:21 crc kubenswrapper[4808]: E0311 09:05:21.401217 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2\": container with ID starting with 7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2 not found: ID does not exist" containerID="7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.401238 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2"} err="failed to get container status \"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2\": rpc error: code = NotFound desc = could not find container \"7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2\": container with ID starting with 7c409fde438e843afe5c9926ca77c42ad26f7c963455ed3066ee3ef3491431c2 not found: ID does not exist" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.449893 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-688446ffb8-4n8n7" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.169:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.449920 4808 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-688446ffb8-4n8n7" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.169:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.712334 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.107:11211: i/o timeout" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.800448 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" path="/var/lib/kubelet/pods/50b3975b-d699-4f86-8aba-3a00f99bfdbc/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.801951 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512d8427-151d-42dd-a2fe-b52d22583604" path="/var/lib/kubelet/pods/512d8427-151d-42dd-a2fe-b52d22583604/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.802929 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" path="/var/lib/kubelet/pods/549d4ad5-b5b0-45bd-87b0-b9a6ee77866e/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.804875 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" path="/var/lib/kubelet/pods/a1e42e33-7453-4b97-abca-0c45cc27faa2/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.805866 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" path="/var/lib/kubelet/pods/b4b8b237-3ec7-41fd-b6a1-fa3670d61c38/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.807176 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" path="/var/lib/kubelet/pods/bf0df220-037c-4d17-b4ac-93f6d7eb4fa0/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.808072 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" path="/var/lib/kubelet/pods/c8798b96-74d7-4e0e-a4c7-97f3c995544b/volumes" Mar 11 09:05:21 crc kubenswrapper[4808]: I0311 09:05:21.810531 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" path="/var/lib/kubelet/pods/e28fc76b-781e-4397-87b5-2b2bf6d2a496/volumes" Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.143536 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.143785 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.144011 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.144046 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.145259 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.146656 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.147688 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:22 crc kubenswrapper[4808]: E0311 09:05:22.147720 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:22 crc kubenswrapper[4808]: I0311 09:05:22.922429 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7676f56769-zslbs" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": dial tcp 10.217.0.170:9696: connect: connection refused" Mar 11 09:05:24 crc kubenswrapper[4808]: E0311 09:05:24.039641 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:05:25 crc kubenswrapper[4808]: E0311 09:05:25.648634 4808 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Mar 11 09:05:25 crc kubenswrapper[4808]: E0311 09:05:25.648941 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:25 crc kubenswrapper[4808]: E0311 09:05:25.648952 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:25 crc kubenswrapper[4808]: E0311 09:05:25.648965 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:25 crc 
kubenswrapper[4808]: E0311 09:05:25.649024 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:05:41.649007951 +0000 UTC m=+1592.602331271 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.142976 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.144061 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.144224 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.145729 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.145762 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.146835 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.147860 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.147895 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:27 crc kubenswrapper[4808]: I0311 09:05:27.789496 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:05:27 crc kubenswrapper[4808]: E0311 09:05:27.789778 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.455755 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456430 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456446 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="probe" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456472 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="probe" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456480 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api-log" Mar 11 
09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456488 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456498 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerName="memcached" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456504 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerName="memcached" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456518 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="setup-container" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456525 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="setup-container" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456534 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456541 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456556 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456564 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456575 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="galera" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 
09:05:28.456582 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="galera" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456599 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456606 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456619 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456627 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456635 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456642 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456650 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456657 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456670 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 
09:05:28.456677 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456691 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456699 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456711 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fde4956-a749-475e-9b5e-978fd33a4239" containerName="kube-state-metrics" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456719 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fde4956-a749-475e-9b5e-978fd33a4239" containerName="kube-state-metrics" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456731 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="setup-container" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456738 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="setup-container" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456750 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456757 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456769 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-api" Mar 11 09:05:28 crc 
kubenswrapper[4808]: I0311 09:05:28.456777 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-api" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456791 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="ovn-northd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456798 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="ovn-northd" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456810 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456817 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456829 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456836 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456845 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="cinder-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456854 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="cinder-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456862 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" containerName="keystone-api" Mar 11 09:05:28 crc 
kubenswrapper[4808]: I0311 09:05:28.456868 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" containerName="keystone-api" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456884 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456903 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456910 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456921 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456928 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456938 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="proxy-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456945 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="proxy-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456958 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-log" Mar 11 09:05:28 crc 
kubenswrapper[4808]: I0311 09:05:28.456965 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-log" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456975 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.456983 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.456993 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="openstack-network-exporter" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457000 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="openstack-network-exporter" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.457011 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-notification-agent" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457020 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-notification-agent" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.457028 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="sg-core" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457035 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="sg-core" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.457046 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-central-agent" Mar 11 
09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457053 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-central-agent" Mar 11 09:05:28 crc kubenswrapper[4808]: E0311 09:05:28.457064 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="mysql-bootstrap" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457071 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="mysql-bootstrap" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457233 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="512d8427-151d-42dd-a2fe-b52d22583604" containerName="nova-cell1-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457245 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="sg-core" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457254 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457262 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e42e33-7453-4b97-abca-0c45cc27faa2" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457272 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457287 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="549d4ad5-b5b0-45bd-87b0-b9a6ee77866e" containerName="rabbitmq" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457299 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" containerName="probe" Mar 
11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457311 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457323 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457333 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457346 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cf17f3-18e6-43f9-ab09-5882a99ffa51" containerName="glance-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457373 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457387 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="909d233e-60cb-4a66-989b-2dc8706ea143" containerName="barbican-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457396 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-notification-agent" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457407 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6baf079-20ab-45df-8e2d-2459a4286c9a" containerName="memcached" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457419 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8b237-3ec7-41fd-b6a1-fa3670d61c38" containerName="keystone-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457428 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0df220-037c-4d17-b4ac-93f6d7eb4fa0" 
containerName="cinder-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457440 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="ovn-northd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457447 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fde4956-a749-475e-9b5e-978fd33a4239" containerName="kube-state-metrics" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457456 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8798b96-74d7-4e0e-a4c7-97f3c995544b" containerName="nova-cell0-conductor-conductor" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457470 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b88a34-0eac-4a47-b3b3-89a8024bbe7b" containerName="openstack-network-exporter" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457479 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a36b4a-f974-46f6-a719-9765499308ed" containerName="glance-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457492 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28fc76b-781e-4397-87b5-2b2bf6d2a496" containerName="nova-scheduler-scheduler" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457502 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b3975b-d699-4f86-8aba-3a00f99bfdbc" containerName="barbican-worker-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457511 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457523 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e8823d-6df6-41fb-b7cd-9cb19e680db1" containerName="cinder-api" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457535 4808 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="25dc3abb-1552-49e8-a8b4-c51edd37f47c" containerName="nova-metadata-metadata" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457546 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="192d6d53-4174-487e-b652-0ad887475d54" containerName="nova-api-log" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457555 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="proxy-httpd" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457565 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c526a61c-3322-446a-8ff5-edd5a02f4b1f" containerName="ceilometer-central-agent" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.457576 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e0938f-9c77-4bf3-b649-1be492ef1647" containerName="galera" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.459111 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.471968 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.593418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.593491 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgq7r\" (UniqueName: \"kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.593528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.694737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.694838 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lgq7r\" (UniqueName: \"kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.695175 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.695535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.695566 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.714600 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgq7r\" (UniqueName: \"kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r\") pod \"redhat-marketplace-9c7fl\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:28 crc kubenswrapper[4808]: I0311 09:05:28.786119 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:29 crc kubenswrapper[4808]: I0311 09:05:29.231784 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:29 crc kubenswrapper[4808]: W0311 09:05:29.242430 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec544436_0ec9_40e9_bf1e_54544208af4c.slice/crio-84785b130d1ce01af682eab5c901e98d1e5285c19e4765657d14dc3a48dc718f WatchSource:0}: Error finding container 84785b130d1ce01af682eab5c901e98d1e5285c19e4765657d14dc3a48dc718f: Status 404 returned error can't find the container with id 84785b130d1ce01af682eab5c901e98d1e5285c19e4765657d14dc3a48dc718f Mar 11 09:05:29 crc kubenswrapper[4808]: I0311 09:05:29.289008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerStarted","Data":"84785b130d1ce01af682eab5c901e98d1e5285c19e4765657d14dc3a48dc718f"} Mar 11 09:05:30 crc kubenswrapper[4808]: I0311 09:05:30.304942 4808 generic.go:334] "Generic (PLEG): container finished" podID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerID="e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa" exitCode=0 Mar 11 09:05:30 crc kubenswrapper[4808]: I0311 09:05:30.305479 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerDied","Data":"e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa"} Mar 11 09:05:31 crc kubenswrapper[4808]: I0311 09:05:31.319422 4808 generic.go:334] "Generic (PLEG): container finished" podID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerID="8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6" exitCode=0 Mar 11 09:05:31 crc kubenswrapper[4808]: I0311 
09:05:31.319663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerDied","Data":"8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6"} Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.143876 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.144535 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.144608 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.145253 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" 
containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.145300 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.146880 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.148347 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:32 crc kubenswrapper[4808]: E0311 09:05:32.148397 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:32 crc kubenswrapper[4808]: I0311 09:05:32.331976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" 
event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerStarted","Data":"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4"} Mar 11 09:05:32 crc kubenswrapper[4808]: I0311 09:05:32.372489 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c7fl" podStartSLOduration=2.902734285 podStartE2EDuration="4.372463193s" podCreationTimestamp="2026-03-11 09:05:28 +0000 UTC" firstStartedPulling="2026-03-11 09:05:30.309609725 +0000 UTC m=+1581.262933085" lastFinishedPulling="2026-03-11 09:05:31.779338663 +0000 UTC m=+1582.732661993" observedRunningTime="2026-03-11 09:05:32.359814946 +0000 UTC m=+1583.313138306" watchObservedRunningTime="2026-03-11 09:05:32.372463193 +0000 UTC m=+1583.325786543" Mar 11 09:05:34 crc kubenswrapper[4808]: E0311 09:05:34.233592 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.349177 4808 generic.go:334] "Generic (PLEG): container finished" podID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerID="35ee99bbbf3c1d04399562decf8fafbf34dea50fb8418e32d3bedd96e8190659" exitCode=0 Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.349284 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" 
event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerDied","Data":"35ee99bbbf3c1d04399562decf8fafbf34dea50fb8418e32d3bedd96e8190659"} Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.423666 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591144 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crzd8\" (UniqueName: \"kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591235 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591303 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591337 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591515 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.591543 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs\") pod \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\" (UID: \"37361775-fb6c-486f-8d7b-fd93f31bbaf5\") " Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.598768 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8" (OuterVolumeSpecName: "kube-api-access-crzd8") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "kube-api-access-crzd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.599598 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.629210 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.629560 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.634622 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config" (OuterVolumeSpecName: "config") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.645645 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.650319 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "37361775-fb6c-486f-8d7b-fd93f31bbaf5" (UID: "37361775-fb6c-486f-8d7b-fd93f31bbaf5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693616 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693650 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693659 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693668 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crzd8\" (UniqueName: \"kubernetes.io/projected/37361775-fb6c-486f-8d7b-fd93f31bbaf5-kube-api-access-crzd8\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693679 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693687 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:34 crc kubenswrapper[4808]: I0311 09:05:34.693694 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37361775-fb6c-486f-8d7b-fd93f31bbaf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.365586 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7676f56769-zslbs" event={"ID":"37361775-fb6c-486f-8d7b-fd93f31bbaf5","Type":"ContainerDied","Data":"8c775eab8b7815798c1844ebf59bb7516455175fe0d7073a12c3138872d94ea1"} Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.365635 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7676f56769-zslbs" Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.365945 4808 scope.go:117] "RemoveContainer" containerID="313c3f34d3027bd9947d2e5694c49e600d145074cf0323486a527bfeeb269fbc" Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.404318 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.410151 4808 scope.go:117] "RemoveContainer" containerID="35ee99bbbf3c1d04399562decf8fafbf34dea50fb8418e32d3bedd96e8190659" Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.414311 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7676f56769-zslbs"] Mar 11 09:05:35 crc kubenswrapper[4808]: I0311 09:05:35.809509 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" path="/var/lib/kubelet/pods/37361775-fb6c-486f-8d7b-fd93f31bbaf5/volumes" Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.143472 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.144177 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.144776 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.144848 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.145717 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.149453 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.152710 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:05:37 crc kubenswrapper[4808]: E0311 09:05:37.152799 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mbbhf" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:05:38 crc kubenswrapper[4808]: I0311 09:05:38.787208 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:38 crc kubenswrapper[4808]: I0311 09:05:38.787706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:38 crc kubenswrapper[4808]: I0311 09:05:38.840513 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:39 crc kubenswrapper[4808]: I0311 09:05:39.464911 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:39 crc kubenswrapper[4808]: I0311 09:05:39.508057 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.420239 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbbhf_ac93b356-9c32-4094-9de5-8fd25c677810/ovs-vswitchd/0.log" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.421284 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac93b356-9c32-4094-9de5-8fd25c677810" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" exitCode=137 Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.421320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerDied","Data":"d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec"} Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.612252 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbbhf_ac93b356-9c32-4094-9de5-8fd25c677810/ovs-vswitchd/0.log" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.613147 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.688853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.688907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.688978 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.688995 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsxq6\" (UniqueName: \"kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689066 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib\") pod \"ac93b356-9c32-4094-9de5-8fd25c677810\" (UID: \"ac93b356-9c32-4094-9de5-8fd25c677810\") " Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689448 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib" (OuterVolumeSpecName: "var-lib") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689481 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run" (OuterVolumeSpecName: "var-run") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.689820 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log" (OuterVolumeSpecName: "var-log") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.692931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts" (OuterVolumeSpecName: "scripts") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.697955 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6" (OuterVolumeSpecName: "kube-api-access-tsxq6") pod "ac93b356-9c32-4094-9de5-8fd25c677810" (UID: "ac93b356-9c32-4094-9de5-8fd25c677810"). InnerVolumeSpecName "kube-api-access-tsxq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.790104 4808 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-log\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.790380 4808 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.790474 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsxq6\" (UniqueName: \"kubernetes.io/projected/ac93b356-9c32-4094-9de5-8fd25c677810-kube-api-access-tsxq6\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.790573 4808 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-var-lib\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:40 
crc kubenswrapper[4808]: I0311 09:05:40.790654 4808 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ac93b356-9c32-4094-9de5-8fd25c677810-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:40 crc kubenswrapper[4808]: I0311 09:05:40.790721 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93b356-9c32-4094-9de5-8fd25c677810-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.431096 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbbhf_ac93b356-9c32-4094-9de5-8fd25c677810/ovs-vswitchd/0.log" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.432328 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbbhf" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.432399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbbhf" event={"ID":"ac93b356-9c32-4094-9de5-8fd25c677810","Type":"ContainerDied","Data":"6cb2cdfcadb65284c6e41825903608f534c1ecc2702f14259dc334c53b9795be"} Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.432440 4808 scope.go:117] "RemoveContainer" containerID="d0f6b59c6c2a37c85684bda61d18699ea95a7b07fe74cd2b454ef4d645be3cec" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.432516 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c7fl" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="registry-server" containerID="cri-o://16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4" gracePeriod=2 Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.457035 4808 scope.go:117] "RemoveContainer" containerID="be587cd4387689015812b77c921ed987cf9e6f3848ffcd2be39c7c2a7a593cef" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.484546 
4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"] Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.484681 4808 scope.go:117] "RemoveContainer" containerID="4274d9b10427464d0ed03dea602f29a75967086af3a1931da7eaba3977be6f52" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.491500 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-mbbhf"] Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.704985 4808 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.705034 4808 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.705052 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.705072 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.705151 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift podName:b2531f01-6ef8-4583-b788-97e0c8b4b50b nodeName:}" failed. No retries permitted until 2026-03-11 09:06:13.705128534 +0000 UTC m=+1624.658451884 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift") pod "swift-storage-0" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.790440 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:05:41 crc kubenswrapper[4808]: E0311 09:05:41.791021 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:05:41 crc kubenswrapper[4808]: I0311 09:05:41.829573 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" path="/var/lib/kubelet/pods/ac93b356-9c32-4094-9de5-8fd25c677810/volumes" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.002501 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.114745 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgq7r\" (UniqueName: \"kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r\") pod \"ec544436-0ec9-40e9-bf1e-54544208af4c\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.114922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content\") pod \"ec544436-0ec9-40e9-bf1e-54544208af4c\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.114953 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities\") pod \"ec544436-0ec9-40e9-bf1e-54544208af4c\" (UID: \"ec544436-0ec9-40e9-bf1e-54544208af4c\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.115805 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities" (OuterVolumeSpecName: "utilities") pod "ec544436-0ec9-40e9-bf1e-54544208af4c" (UID: "ec544436-0ec9-40e9-bf1e-54544208af4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.118894 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r" (OuterVolumeSpecName: "kube-api-access-lgq7r") pod "ec544436-0ec9-40e9-bf1e-54544208af4c" (UID: "ec544436-0ec9-40e9-bf1e-54544208af4c"). InnerVolumeSpecName "kube-api-access-lgq7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.144337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec544436-0ec9-40e9-bf1e-54544208af4c" (UID: "ec544436-0ec9-40e9-bf1e-54544208af4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.216612 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.216849 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec544436-0ec9-40e9-bf1e-54544208af4c-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.216913 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgq7r\" (UniqueName: \"kubernetes.io/projected/ec544436-0ec9-40e9-bf1e-54544208af4c-kube-api-access-lgq7r\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.458057 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerID="073e4d7fcffe762aa5c3e2750fba257255a258fb332465e09d8529f78025ea59" exitCode=137 Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.458143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"073e4d7fcffe762aa5c3e2750fba257255a258fb332465e09d8529f78025ea59"} Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.463549 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerID="16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4" exitCode=0 Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.463578 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c7fl" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.463604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerDied","Data":"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4"} Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.463654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c7fl" event={"ID":"ec544436-0ec9-40e9-bf1e-54544208af4c","Type":"ContainerDied","Data":"84785b130d1ce01af682eab5c901e98d1e5285c19e4765657d14dc3a48dc718f"} Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.463684 4808 scope.go:117] "RemoveContainer" containerID="16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.502548 4808 scope.go:117] "RemoveContainer" containerID="8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.511217 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.519315 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c7fl"] Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.529154 4808 scope.go:117] "RemoveContainer" containerID="e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.592249 4808 scope.go:117] "RemoveContainer" 
containerID="16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4" Mar 11 09:05:42 crc kubenswrapper[4808]: E0311 09:05:42.592780 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4\": container with ID starting with 16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4 not found: ID does not exist" containerID="16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.592831 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4"} err="failed to get container status \"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4\": rpc error: code = NotFound desc = could not find container \"16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4\": container with ID starting with 16899b431be497f14d6b2b4d2b29589cc71777212d274bf4e5f02b01eb85cce4 not found: ID does not exist" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.592865 4808 scope.go:117] "RemoveContainer" containerID="8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6" Mar 11 09:05:42 crc kubenswrapper[4808]: E0311 09:05:42.593726 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6\": container with ID starting with 8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6 not found: ID does not exist" containerID="8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.593764 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6"} err="failed to get container status \"8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6\": rpc error: code = NotFound desc = could not find container \"8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6\": container with ID starting with 8128a8e95c99907c9bca62973ec5a71a16bf1d9e18793812021f95d44e4afbe6 not found: ID does not exist" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.593791 4808 scope.go:117] "RemoveContainer" containerID="e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa" Mar 11 09:05:42 crc kubenswrapper[4808]: E0311 09:05:42.594611 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa\": container with ID starting with e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa not found: ID does not exist" containerID="e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.594648 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa"} err="failed to get container status \"e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa\": rpc error: code = NotFound desc = could not find container \"e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa\": container with ID starting with e76346e63560d469c6ba820863686b0624baf4613034eb0b7e85a35a2d7123fa not found: ID does not exist" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.810614 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.928778 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.928832 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssfkg\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.928929 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.928953 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.928984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.929046 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock\") pod \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\" (UID: \"b2531f01-6ef8-4583-b788-97e0c8b4b50b\") " Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.929649 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache" (OuterVolumeSpecName: "cache") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.929889 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock" (OuterVolumeSpecName: "lock") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.943677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "swift") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.943724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:42 crc kubenswrapper[4808]: I0311 09:05:42.944066 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg" (OuterVolumeSpecName: "kube-api-access-ssfkg") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "kube-api-access-ssfkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.031331 4808 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-cache\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.031409 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.031428 4808 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2531f01-6ef8-4583-b788-97e0c8b4b50b-lock\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.031443 4808 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.031457 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssfkg\" (UniqueName: \"kubernetes.io/projected/b2531f01-6ef8-4583-b788-97e0c8b4b50b-kube-api-access-ssfkg\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.054478 4808 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.133472 4808 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.284947 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2531f01-6ef8-4583-b788-97e0c8b4b50b" (UID: "b2531f01-6ef8-4583-b788-97e0c8b4b50b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.335665 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2531f01-6ef8-4583-b788-97e0c8b4b50b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.486274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2531f01-6ef8-4583-b788-97e0c8b4b50b","Type":"ContainerDied","Data":"14a821528e04adaea86b29736c1578ca034f3a46815e09028c6efdf76369972a"} Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.486341 4808 scope.go:117] "RemoveContainer" containerID="073e4d7fcffe762aa5c3e2750fba257255a258fb332465e09d8529f78025ea59" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.486451 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.511408 4808 scope.go:117] "RemoveContainer" containerID="8703e021e4664d65f7198a1e10e27fae65ecd623ec350adc5affd0e319e1f91c" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.543159 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.549006 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.556583 4808 scope.go:117] "RemoveContainer" containerID="1c242216161e4c3f5f19cceec89f8a4f772fb8534970698b0f08070d25afb355" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.611864 4808 scope.go:117] "RemoveContainer" containerID="1fe8a90328e3c5fc5913211fdc47a8b9d9c43d633f87180fdcbdef9a02958f4e" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.634267 4808 scope.go:117] "RemoveContainer" containerID="dd4a12d0f40b70bed0ff12cd9961f609f614dcb0c77bb2b41c37fb77c51b62c9" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.665123 4808 scope.go:117] "RemoveContainer" containerID="36d57f8f584cb1d8ebdc130edda1da090b5344f1959c1b5fbee4de63ad660d1d" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.696139 4808 scope.go:117] "RemoveContainer" containerID="5610b5414b923dbe5f29196fe9b69a93bc333d712a301053a8290a033d1900e2" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.717417 4808 scope.go:117] "RemoveContainer" containerID="91cb6e0a1f936b6dc40058ec5049a18b1acb5b6601c22fa29cce4e18b74747dd" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.739389 4808 scope.go:117] "RemoveContainer" containerID="37d0039d1a631f590de0477dafeba545b23952735655b1396940b1af448dcf5c" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.762588 4808 scope.go:117] "RemoveContainer" containerID="b34c64e3d4e3825344ce6d566596454bdaa22b31521f5119b23e2df58bc1f23d" Mar 11 09:05:43 crc 
kubenswrapper[4808]: I0311 09:05:43.795896 4808 scope.go:117] "RemoveContainer" containerID="ba474a4645c4e2ea029e0f9ff8bded4ccbfb98f7157a6bc5a8efcb5ca613c7de" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.802475 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" path="/var/lib/kubelet/pods/b2531f01-6ef8-4583-b788-97e0c8b4b50b/volumes" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.806229 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" path="/var/lib/kubelet/pods/ec544436-0ec9-40e9-bf1e-54544208af4c/volumes" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.822857 4808 scope.go:117] "RemoveContainer" containerID="081fbc280b7cac976f12e7408ccac953bdb3e17b1d8f4bac92f9023da0402d27" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.847065 4808 scope.go:117] "RemoveContainer" containerID="267b966e81c3ff2dcbbdf7e8e9d6fb5a08be9d360dec65483ea3395dbc33a811" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.877589 4808 scope.go:117] "RemoveContainer" containerID="681ee01ae1a57e4a64b46a088c7eb77d95b1fdf9586896e89410116abec29a90" Mar 11 09:05:43 crc kubenswrapper[4808]: I0311 09:05:43.902052 4808 scope.go:117] "RemoveContainer" containerID="82f8358e3c23ae5caf4686e4d2ed129be3db93f6c1646acae71a1541a038ba65" Mar 11 09:05:44 crc kubenswrapper[4808]: E0311 09:05:44.404875 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: 
unable to find data in memory cache]" Mar 11 09:05:54 crc kubenswrapper[4808]: E0311 09:05:54.614288 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:05:54 crc kubenswrapper[4808]: I0311 09:05:54.789798 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:05:54 crc kubenswrapper[4808]: E0311 09:05:54.790262 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.163679 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553666-rfbnf"] Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164496 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164519 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 
09:06:00.164540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="rsync" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164551 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="rsync" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164571 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164582 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164602 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server-init" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164613 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server-init" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164632 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="extract-content" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164642 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="extract-content" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164664 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-expirer" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164674 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-expirer" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164694 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164704 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-server" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164721 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="swift-recon-cron" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164729 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="swift-recon-cron" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164744 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="registry-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164755 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="registry-server" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164778 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164792 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-server" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164810 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164818 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164830 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164839 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164849 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164857 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164870 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164879 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-server" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164894 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164902 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164913 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164921 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164937 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-api" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164945 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-api" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164955 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="extract-utilities" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164962 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="extract-utilities" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.164974 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.164982 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.165000 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165008 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.165022 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165031 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.165042 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-httpd" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165050 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-httpd" Mar 11 09:06:00 crc kubenswrapper[4808]: E0311 09:06:00.165060 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-reaper" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165068 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-reaper" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165229 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovs-vswitchd" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165243 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="swift-recon-cron" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165258 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165266 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165278 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165290 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec544436-0ec9-40e9-bf1e-54544208af4c" containerName="registry-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165300 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-httpd" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165310 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="rsync" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165326 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165337 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165350 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac93b356-9c32-4094-9de5-8fd25c677810" containerName="ovsdb-server" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165388 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-expirer" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165403 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165417 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-reaper" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165427 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-updater" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165438 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37361775-fb6c-486f-8d7b-fd93f31bbaf5" containerName="neutron-api" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 
09:06:00.165451 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165463 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="object-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165472 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="container-replicator" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.165483 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2531f01-6ef8-4583-b788-97e0c8b4b50b" containerName="account-auditor" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.166001 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.169746 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.169799 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.170304 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.176187 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-rfbnf"] Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.358540 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8tq\" (UniqueName: \"kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq\") pod 
\"auto-csr-approver-29553666-rfbnf\" (UID: \"a344d3bf-78f3-416b-b63f-d1ac03728baf\") " pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.459778 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8tq\" (UniqueName: \"kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq\") pod \"auto-csr-approver-29553666-rfbnf\" (UID: \"a344d3bf-78f3-416b-b63f-d1ac03728baf\") " pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.496253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8tq\" (UniqueName: \"kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq\") pod \"auto-csr-approver-29553666-rfbnf\" (UID: \"a344d3bf-78f3-416b-b63f-d1ac03728baf\") " pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:00 crc kubenswrapper[4808]: I0311 09:06:00.785147 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:01 crc kubenswrapper[4808]: I0311 09:06:01.295643 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-rfbnf"] Mar 11 09:06:01 crc kubenswrapper[4808]: I0311 09:06:01.690370 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" event={"ID":"a344d3bf-78f3-416b-b63f-d1ac03728baf","Type":"ContainerStarted","Data":"6461a393437b8ae26ca054b600cc9fcc3bfc6607dfd213829f2b1daee6b61384"} Mar 11 09:06:02 crc kubenswrapper[4808]: I0311 09:06:02.700910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" event={"ID":"a344d3bf-78f3-416b-b63f-d1ac03728baf","Type":"ContainerStarted","Data":"2ed7b46258c1046533435bd805e924e189e307712114cbed6cc8b3f1a10aeb7f"} Mar 11 09:06:02 crc kubenswrapper[4808]: I0311 09:06:02.726254 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" podStartSLOduration=1.794020629 podStartE2EDuration="2.726226174s" podCreationTimestamp="2026-03-11 09:06:00 +0000 UTC" firstStartedPulling="2026-03-11 09:06:01.294058137 +0000 UTC m=+1612.247381447" lastFinishedPulling="2026-03-11 09:06:02.226263662 +0000 UTC m=+1613.179586992" observedRunningTime="2026-03-11 09:06:02.720165918 +0000 UTC m=+1613.673489258" watchObservedRunningTime="2026-03-11 09:06:02.726226174 +0000 UTC m=+1613.679549524" Mar 11 09:06:03 crc kubenswrapper[4808]: I0311 09:06:03.716498 4808 generic.go:334] "Generic (PLEG): container finished" podID="a344d3bf-78f3-416b-b63f-d1ac03728baf" containerID="2ed7b46258c1046533435bd805e924e189e307712114cbed6cc8b3f1a10aeb7f" exitCode=0 Mar 11 09:06:03 crc kubenswrapper[4808]: I0311 09:06:03.716542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" 
event={"ID":"a344d3bf-78f3-416b-b63f-d1ac03728baf","Type":"ContainerDied","Data":"2ed7b46258c1046533435bd805e924e189e307712114cbed6cc8b3f1a10aeb7f"} Mar 11 09:06:04 crc kubenswrapper[4808]: E0311 09:06:04.810675 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd1979f_d1de_42a8_be8e_b61087f737bc.slice/crio-conmon-36c750f48ac33ddc1cee8a438d54bf885813c3f9d9bdb51b0dfec0a66d24f056.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.076696 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.131312 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8tq\" (UniqueName: \"kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq\") pod \"a344d3bf-78f3-416b-b63f-d1ac03728baf\" (UID: \"a344d3bf-78f3-416b-b63f-d1ac03728baf\") " Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.136446 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq" (OuterVolumeSpecName: "kube-api-access-4m8tq") pod "a344d3bf-78f3-416b-b63f-d1ac03728baf" (UID: "a344d3bf-78f3-416b-b63f-d1ac03728baf"). InnerVolumeSpecName "kube-api-access-4m8tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.232563 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8tq\" (UniqueName: \"kubernetes.io/projected/a344d3bf-78f3-416b-b63f-d1ac03728baf-kube-api-access-4m8tq\") on node \"crc\" DevicePath \"\"" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.739435 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" event={"ID":"a344d3bf-78f3-416b-b63f-d1ac03728baf","Type":"ContainerDied","Data":"6461a393437b8ae26ca054b600cc9fcc3bfc6607dfd213829f2b1daee6b61384"} Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.739504 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6461a393437b8ae26ca054b600cc9fcc3bfc6607dfd213829f2b1daee6b61384" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.739607 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-rfbnf" Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.817977 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-56kcb"] Mar 11 09:06:05 crc kubenswrapper[4808]: I0311 09:06:05.827743 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-56kcb"] Mar 11 09:06:07 crc kubenswrapper[4808]: I0311 09:06:07.789604 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:06:07 crc kubenswrapper[4808]: E0311 09:06:07.790352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:06:07 crc kubenswrapper[4808]: I0311 09:06:07.812600 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfb1b93-4963-4e67-a64b-6489306f8fc3" path="/var/lib/kubelet/pods/cdfb1b93-4963-4e67-a64b-6489306f8fc3/volumes" Mar 11 09:06:22 crc kubenswrapper[4808]: I0311 09:06:22.789581 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:06:22 crc kubenswrapper[4808]: E0311 09:06:22.791277 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:06:37 crc kubenswrapper[4808]: I0311 09:06:37.790707 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:06:37 crc kubenswrapper[4808]: E0311 09:06:37.791531 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:06:48 crc kubenswrapper[4808]: I0311 09:06:48.790282 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:06:48 crc kubenswrapper[4808]: E0311 09:06:48.791533 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.146169 4808 scope.go:117] "RemoveContainer" containerID="a6e24348634dbc9149a69b40299ae6faaf01c5af688867530224a61ee87e04f9" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.188815 4808 scope.go:117] "RemoveContainer" containerID="1e0b35da78fe287646897a43e662b36a085c2c17ed5eebfd88eca56be8ccd50b" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.234396 4808 scope.go:117] "RemoveContainer" containerID="0389425c7f75e1346c18db409021d7a05794095c602a590c601ece68d2aec62f" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.274035 4808 scope.go:117] "RemoveContainer" containerID="1e5053fde5053593012a204609352be46ea1f706659e831ffb42e9784089cb3d" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.298755 4808 scope.go:117] "RemoveContainer" containerID="92849822152ba4ba167bce3b72b0023eb4ac5a5867b0b2030c0a54cc5e08bb5f" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.317831 4808 scope.go:117] "RemoveContainer" containerID="799583fc7147c5f13c161f65b5dc957a705e3b62301307bedb0b4e78c75d73db" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.343070 4808 scope.go:117] "RemoveContainer" containerID="86d8334eb64365eca4fd6bcd5fcc557f4b30602304be2ee5bbea4165f15dd9cd" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.386058 4808 scope.go:117] "RemoveContainer" containerID="828d32f123ce58adca34aac2e5a6c5de4c805db6efdb744bad2536f49ee36dc6" Mar 11 09:06:51 crc kubenswrapper[4808]: I0311 09:06:51.406169 4808 scope.go:117] "RemoveContainer" containerID="82f8eb58e9f3ad3c8cc07e575893f64a2f9e5741e48c57fc799afc4a33b2dc6a" Mar 
11 09:07:03 crc kubenswrapper[4808]: I0311 09:07:03.789923 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:07:03 crc kubenswrapper[4808]: E0311 09:07:03.790890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:07:18 crc kubenswrapper[4808]: I0311 09:07:18.790507 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:07:18 crc kubenswrapper[4808]: E0311 09:07:18.791459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:07:29 crc kubenswrapper[4808]: I0311 09:07:29.794466 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:07:29 crc kubenswrapper[4808]: E0311 09:07:29.795660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:07:43 crc kubenswrapper[4808]: I0311 09:07:43.792236 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:07:43 crc kubenswrapper[4808]: E0311 09:07:43.793164 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.599137 4808 scope.go:117] "RemoveContainer" containerID="e1d5167e0ef98d8c3a6364c193b5c0835578c635b67e1e0877f6c0259ac12fe8" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.672546 4808 scope.go:117] "RemoveContainer" containerID="baba432d3961a2c86fc9d83251361663c5b70aad2fc46af4af66f314f4348ec9" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.731260 4808 scope.go:117] "RemoveContainer" containerID="63c3878f799661a8107b4ab4884d9b3b435825ecb495eafd1aa272df9f3dac40" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.778742 4808 scope.go:117] "RemoveContainer" containerID="53d4dc592fa925d1f668a0777b7efa1859394b6a110fbb6a7bc3f84c3d69c10b" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.808250 4808 scope.go:117] "RemoveContainer" containerID="1e995a57e4f031753f2434695222feaa9b97c3336d52b19d5f236611f1937daa" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.839276 4808 scope.go:117] "RemoveContainer" containerID="29557113955ec225b88890526bdc8ee7308caa89a7f024d9bfa00ba222ead59c" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.882524 4808 scope.go:117] "RemoveContainer" containerID="8847b3ed61fa0264c90d6d596e3065ce3477671b8ae7c4dcf8914edca1f79d84" Mar 11 09:07:51 crc 
kubenswrapper[4808]: I0311 09:07:51.917418 4808 scope.go:117] "RemoveContainer" containerID="b1dd2cef2df602ec6e3bf88dd7ffec4d1afd4d689d44b78e7842637f3a9514e0" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.941976 4808 scope.go:117] "RemoveContainer" containerID="cb3ce65524aaf92c57bd65d02e77f99b1572dbdb2d2fc4229b1ecb25766b4545" Mar 11 09:07:51 crc kubenswrapper[4808]: I0311 09:07:51.979621 4808 scope.go:117] "RemoveContainer" containerID="6cf76293f7898e4cc19049e75e962faaa5a680212f9f242bc918b30601d04608" Mar 11 09:07:52 crc kubenswrapper[4808]: I0311 09:07:52.005050 4808 scope.go:117] "RemoveContainer" containerID="af8c28b1bc01858c7533acc734199553d01fb4d4f078a8c4a4f0170fb48a19fd" Mar 11 09:07:52 crc kubenswrapper[4808]: I0311 09:07:52.041818 4808 scope.go:117] "RemoveContainer" containerID="7d3f919e929f2c67fd8f52fc95d0e63db5f86456fb6f7636f5fe99bacfe5ab3e" Mar 11 09:07:52 crc kubenswrapper[4808]: I0311 09:07:52.066488 4808 scope.go:117] "RemoveContainer" containerID="e7f73eff01ef0223f26e774598f8d99a1acae0436283634adab1b2c6d4f6862e" Mar 11 09:07:52 crc kubenswrapper[4808]: I0311 09:07:52.093957 4808 scope.go:117] "RemoveContainer" containerID="9eff5d169ad499295a27031b8c4fa2bdf75a4be1ad00b111d6ac055fc71535bb" Mar 11 09:07:52 crc kubenswrapper[4808]: I0311 09:07:52.119567 4808 scope.go:117] "RemoveContainer" containerID="439d8d9098e6f2a4d0490da8ab9290e2957fa9f44dcf12cb0782f780b00d1f27" Mar 11 09:07:54 crc kubenswrapper[4808]: I0311 09:07:54.790624 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:07:54 crc kubenswrapper[4808]: E0311 09:07:54.792489 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.143865 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553668-zthqs"] Mar 11 09:08:00 crc kubenswrapper[4808]: E0311 09:08:00.145557 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a344d3bf-78f3-416b-b63f-d1ac03728baf" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.145580 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a344d3bf-78f3-416b-b63f-d1ac03728baf" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.145927 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a344d3bf-78f3-416b-b63f-d1ac03728baf" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.147867 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.151684 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.152180 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.152185 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.158875 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-zthqs"] Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.245210 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7f4m\" (UniqueName: 
\"kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m\") pod \"auto-csr-approver-29553668-zthqs\" (UID: \"5cdb5fb0-486f-4682-8b57-8d8547c75c40\") " pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.347055 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7f4m\" (UniqueName: \"kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m\") pod \"auto-csr-approver-29553668-zthqs\" (UID: \"5cdb5fb0-486f-4682-8b57-8d8547c75c40\") " pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.378767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7f4m\" (UniqueName: \"kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m\") pod \"auto-csr-approver-29553668-zthqs\" (UID: \"5cdb5fb0-486f-4682-8b57-8d8547c75c40\") " pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.473532 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:00 crc kubenswrapper[4808]: I0311 09:08:00.938762 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-zthqs"] Mar 11 09:08:00 crc kubenswrapper[4808]: W0311 09:08:00.950598 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cdb5fb0_486f_4682_8b57_8d8547c75c40.slice/crio-e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673 WatchSource:0}: Error finding container e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673: Status 404 returned error can't find the container with id e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673 Mar 11 09:08:01 crc kubenswrapper[4808]: I0311 09:08:01.947708 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-zthqs" event={"ID":"5cdb5fb0-486f-4682-8b57-8d8547c75c40","Type":"ContainerStarted","Data":"e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673"} Mar 11 09:08:02 crc kubenswrapper[4808]: I0311 09:08:02.955240 4808 generic.go:334] "Generic (PLEG): container finished" podID="5cdb5fb0-486f-4682-8b57-8d8547c75c40" containerID="2e71b80d005e7f62280ffd44d21cda569cd2fc6ee8722b786691772fc9206bc8" exitCode=0 Mar 11 09:08:02 crc kubenswrapper[4808]: I0311 09:08:02.955295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-zthqs" event={"ID":"5cdb5fb0-486f-4682-8b57-8d8547c75c40","Type":"ContainerDied","Data":"2e71b80d005e7f62280ffd44d21cda569cd2fc6ee8722b786691772fc9206bc8"} Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.203305 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.303537 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7f4m\" (UniqueName: \"kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m\") pod \"5cdb5fb0-486f-4682-8b57-8d8547c75c40\" (UID: \"5cdb5fb0-486f-4682-8b57-8d8547c75c40\") " Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.312592 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m" (OuterVolumeSpecName: "kube-api-access-s7f4m") pod "5cdb5fb0-486f-4682-8b57-8d8547c75c40" (UID: "5cdb5fb0-486f-4682-8b57-8d8547c75c40"). InnerVolumeSpecName "kube-api-access-s7f4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.405845 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7f4m\" (UniqueName: \"kubernetes.io/projected/5cdb5fb0-486f-4682-8b57-8d8547c75c40-kube-api-access-s7f4m\") on node \"crc\" DevicePath \"\"" Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.972067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-zthqs" event={"ID":"5cdb5fb0-486f-4682-8b57-8d8547c75c40","Type":"ContainerDied","Data":"e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673"} Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.972148 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7012ffe78d9e8da5d9f8f204933a3edba8f4177e81cd61ecc891c4dd27a1673" Mar 11 09:08:04 crc kubenswrapper[4808]: I0311 09:08:04.972221 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-zthqs" Mar 11 09:08:05 crc kubenswrapper[4808]: I0311 09:08:05.302428 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-hzj4w"] Mar 11 09:08:05 crc kubenswrapper[4808]: I0311 09:08:05.312282 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-hzj4w"] Mar 11 09:08:05 crc kubenswrapper[4808]: I0311 09:08:05.799082 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45244591-dcfa-4143-81a3-0be4bbca5450" path="/var/lib/kubelet/pods/45244591-dcfa-4143-81a3-0be4bbca5450/volumes" Mar 11 09:08:08 crc kubenswrapper[4808]: I0311 09:08:08.789313 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:08:08 crc kubenswrapper[4808]: E0311 09:08:08.789908 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:08:21 crc kubenswrapper[4808]: I0311 09:08:21.789482 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:08:21 crc kubenswrapper[4808]: E0311 09:08:21.791026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:08:36 crc kubenswrapper[4808]: I0311 09:08:36.789777 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:08:36 crc kubenswrapper[4808]: E0311 09:08:36.790788 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:08:50 crc kubenswrapper[4808]: I0311 09:08:50.789274 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:08:50 crc kubenswrapper[4808]: E0311 09:08:50.789849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.357001 4808 scope.go:117] "RemoveContainer" containerID="c57aa8a8af604e7085005636bc6d04d8b2f1f852212d6609092d95311d5a8ea9" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.404037 4808 scope.go:117] "RemoveContainer" containerID="af7c20691f2ca3c9ce9cd99eba3a140d610ed51c1ead1e4be3eb85a879f669b7" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.438864 4808 scope.go:117] "RemoveContainer" containerID="83e4299f8db969047f115b2e6d865e8ed4751b1a09a6cc0470a630153af91366" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.464039 4808 
scope.go:117] "RemoveContainer" containerID="7c27079d33e451ffa153a65361da4dfa11e1369b3f6cff896dfa022cdc0aee8f" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.482783 4808 scope.go:117] "RemoveContainer" containerID="7466a0268d7368fd7ab3cd95b2a779f359f149cf2f4b6201476727e40a6d77bd" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.534116 4808 scope.go:117] "RemoveContainer" containerID="d1944cb374befa9392ca208c08c1f194ef3875e319ef0b634adb20cebad58110" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.552014 4808 scope.go:117] "RemoveContainer" containerID="bdae89947aee5d01574bc898d1bb0cd1ef75b87aa4aea1f2faa3e6aefa15e2fa" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.569668 4808 scope.go:117] "RemoveContainer" containerID="7698d68a645522d06d9e9b82db59dae460efc7f1486f465cafc9671f6152e29c" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.601813 4808 scope.go:117] "RemoveContainer" containerID="52ab7660270800e9762a93dcee038a29b5580e99936735b6f8a6a592b04438f0" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.635942 4808 scope.go:117] "RemoveContainer" containerID="d8aa3bfc204c1fbab362a3e358821b2edbdd78652875ef2e61ff3450beb8bec7" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.656573 4808 scope.go:117] "RemoveContainer" containerID="f0ec50125001e1ecc713469145b8273720df67689800b8bfa82e2d70ca2acaa4" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.694937 4808 scope.go:117] "RemoveContainer" containerID="ed78a0f60c4b6e77480698d703044dd5de4e4c818ad638d0fa966fddfb16ff16" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.709970 4808 scope.go:117] "RemoveContainer" containerID="649e62172af41ee32fa8ad64ffdf69d7b43c61361a5f0fec3f85bbd20e28c3fa" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.726061 4808 scope.go:117] "RemoveContainer" containerID="5bbe12a55844c71bd4cfc6765ae8a6f51e3c137772a16659625da2613a4bab60" Mar 11 09:08:52 crc kubenswrapper[4808]: I0311 09:08:52.744435 4808 scope.go:117] 
"RemoveContainer" containerID="f21d8ff4624c0e299072dec0ed64600ad702be3d85c6613566143c1a8fa60529" Mar 11 09:09:04 crc kubenswrapper[4808]: I0311 09:09:04.789629 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:09:04 crc kubenswrapper[4808]: E0311 09:09:04.790616 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:09:16 crc kubenswrapper[4808]: I0311 09:09:16.789789 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:09:16 crc kubenswrapper[4808]: E0311 09:09:16.790404 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:09:27 crc kubenswrapper[4808]: I0311 09:09:27.789897 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:09:27 crc kubenswrapper[4808]: E0311 09:09:27.791814 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:09:38 crc kubenswrapper[4808]: I0311 09:09:38.789118 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:09:38 crc kubenswrapper[4808]: E0311 09:09:38.790756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:09:52 crc kubenswrapper[4808]: I0311 09:09:52.790014 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:09:52 crc kubenswrapper[4808]: E0311 09:09:52.790901 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:09:52 crc kubenswrapper[4808]: I0311 09:09:52.886761 4808 scope.go:117] "RemoveContainer" containerID="2098f288f36d618b0b8ff8db38f323ea05e37a3350ea1f3a2a163a5c4608f26c" Mar 11 09:09:52 crc kubenswrapper[4808]: I0311 09:09:52.918450 4808 scope.go:117] "RemoveContainer" containerID="6030d37cc99ab2df68249c8c25ca42f7c3e532c1dfe31feb2d90ccfa67c70a0c" Mar 11 09:09:52 crc kubenswrapper[4808]: I0311 09:09:52.977919 4808 scope.go:117] "RemoveContainer" containerID="f69486871219bd5a3cab0b6e63e7846b8bd6ca99e12a1918f2001e44c12769b8" Mar 
11 09:09:53 crc kubenswrapper[4808]: I0311 09:09:53.009318 4808 scope.go:117] "RemoveContainer" containerID="f5993ed9fccb696165a5956e8d830cce2b087e556012593cbecfbedebe131fea" Mar 11 09:09:53 crc kubenswrapper[4808]: I0311 09:09:53.035834 4808 scope.go:117] "RemoveContainer" containerID="f7de189655d9da6f7a1e8cc7c1967571cdb0ed5bcda3796b4f41b1596fabc0a5" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.172482 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553670-qhptn"] Mar 11 09:10:00 crc kubenswrapper[4808]: E0311 09:10:00.173733 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdb5fb0-486f-4682-8b57-8d8547c75c40" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.173757 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdb5fb0-486f-4682-8b57-8d8547c75c40" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.174046 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdb5fb0-486f-4682-8b57-8d8547c75c40" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.174832 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.179042 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.179251 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.180163 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.183745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-qhptn"] Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.277772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlgzs\" (UniqueName: \"kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs\") pod \"auto-csr-approver-29553670-qhptn\" (UID: \"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b\") " pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.378729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgzs\" (UniqueName: \"kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs\") pod \"auto-csr-approver-29553670-qhptn\" (UID: \"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b\") " pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.407022 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlgzs\" (UniqueName: \"kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs\") pod \"auto-csr-approver-29553670-qhptn\" (UID: \"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b\") " 
pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.510999 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.934305 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-qhptn"] Mar 11 09:10:00 crc kubenswrapper[4808]: W0311 09:10:00.935617 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc7c529_f9ec_4c84_bcf3_5d5c97aa1b3b.slice/crio-bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2 WatchSource:0}: Error finding container bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2: Status 404 returned error can't find the container with id bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2 Mar 11 09:10:00 crc kubenswrapper[4808]: I0311 09:10:00.937403 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:10:01 crc kubenswrapper[4808]: I0311 09:10:01.014960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-qhptn" event={"ID":"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b","Type":"ContainerStarted","Data":"bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2"} Mar 11 09:10:03 crc kubenswrapper[4808]: I0311 09:10:03.031125 4808 generic.go:334] "Generic (PLEG): container finished" podID="8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" containerID="f3fc88acc0e4e7832c4d82f271ad9f16e21fbe5b73a9e3013b2fe512a046b0aa" exitCode=0 Mar 11 09:10:03 crc kubenswrapper[4808]: I0311 09:10:03.031181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-qhptn" 
event={"ID":"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b","Type":"ContainerDied","Data":"f3fc88acc0e4e7832c4d82f271ad9f16e21fbe5b73a9e3013b2fe512a046b0aa"} Mar 11 09:10:04 crc kubenswrapper[4808]: I0311 09:10:04.283838 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:04 crc kubenswrapper[4808]: I0311 09:10:04.435492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlgzs\" (UniqueName: \"kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs\") pod \"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b\" (UID: \"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b\") " Mar 11 09:10:04 crc kubenswrapper[4808]: I0311 09:10:04.443520 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs" (OuterVolumeSpecName: "kube-api-access-tlgzs") pod "8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" (UID: "8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b"). InnerVolumeSpecName "kube-api-access-tlgzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:10:04 crc kubenswrapper[4808]: I0311 09:10:04.537716 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlgzs\" (UniqueName: \"kubernetes.io/projected/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b-kube-api-access-tlgzs\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.046853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-qhptn" event={"ID":"8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b","Type":"ContainerDied","Data":"bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2"} Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.046893 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb540fb813b1f03909f512cca85af72268fb6251ff692175dc4e28c8f7eb3c2" Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.047379 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-qhptn" Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.348423 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-5zwr4"] Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.354523 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-5zwr4"] Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.789583 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:10:05 crc kubenswrapper[4808]: E0311 09:10:05.789941 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:10:05 crc kubenswrapper[4808]: I0311 09:10:05.802748 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ec9b97-accf-46c4-9ec4-a66f90979d05" path="/var/lib/kubelet/pods/46ec9b97-accf-46c4-9ec4-a66f90979d05/volumes" Mar 11 09:10:19 crc kubenswrapper[4808]: I0311 09:10:19.793498 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:10:20 crc kubenswrapper[4808]: I0311 09:10:20.180611 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb"} Mar 11 09:10:53 crc kubenswrapper[4808]: I0311 09:10:53.102000 4808 scope.go:117] "RemoveContainer" containerID="0b673ef902153b042052c1c50aa5304582d6373cf0de80c64763348094d8a4a3" Mar 11 09:10:53 crc kubenswrapper[4808]: I0311 09:10:53.146768 4808 scope.go:117] "RemoveContainer" containerID="07ff8f6d1805a2a54b4d0bba98badaaa8bb257fee469c5d8c592bb7825b277e9" Mar 11 09:10:53 crc kubenswrapper[4808]: I0311 09:10:53.198006 4808 scope.go:117] "RemoveContainer" containerID="41aef0a023a3378b84cb70fe3872c109eb0c3c9ac4b7471c94ff982c80281442" Mar 11 09:10:53 crc kubenswrapper[4808]: I0311 09:10:53.224809 4808 scope.go:117] "RemoveContainer" containerID="d02f043e013fd12091fec4a22839ba78f4684779ae4ff1b219c9f90367f903e7" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.168027 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553672-vd5p2"] Mar 11 09:12:00 crc kubenswrapper[4808]: E0311 09:12:00.168912 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" containerName="oc" Mar 11 09:12:00 crc 
kubenswrapper[4808]: I0311 09:12:00.168925 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" containerName="oc" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.169060 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" containerName="oc" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.169663 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.183930 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.184125 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.184177 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.195350 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-vd5p2"] Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.338736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxwg\" (UniqueName: \"kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg\") pod \"auto-csr-approver-29553672-vd5p2\" (UID: \"aa77e4ec-3714-4b83-9245-07d5cbc814d7\") " pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.439992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxwg\" (UniqueName: \"kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg\") pod \"auto-csr-approver-29553672-vd5p2\" 
(UID: \"aa77e4ec-3714-4b83-9245-07d5cbc814d7\") " pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.463059 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxwg\" (UniqueName: \"kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg\") pod \"auto-csr-approver-29553672-vd5p2\" (UID: \"aa77e4ec-3714-4b83-9245-07d5cbc814d7\") " pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.503166 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.702176 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-vd5p2"] Mar 11 09:12:00 crc kubenswrapper[4808]: I0311 09:12:00.986555 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" event={"ID":"aa77e4ec-3714-4b83-9245-07d5cbc814d7","Type":"ContainerStarted","Data":"55ea1b7c6f57e3493ca857239a2bac386b9dc00eb3f5b21320db325386593726"} Mar 11 09:12:03 crc kubenswrapper[4808]: I0311 09:12:03.005045 4808 generic.go:334] "Generic (PLEG): container finished" podID="aa77e4ec-3714-4b83-9245-07d5cbc814d7" containerID="b768cdfaf7349316cb6f82547734b735c2235f4d8df6a9907758083a5f8874f7" exitCode=0 Mar 11 09:12:03 crc kubenswrapper[4808]: I0311 09:12:03.005090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" event={"ID":"aa77e4ec-3714-4b83-9245-07d5cbc814d7","Type":"ContainerDied","Data":"b768cdfaf7349316cb6f82547734b735c2235f4d8df6a9907758083a5f8874f7"} Mar 11 09:12:04 crc kubenswrapper[4808]: I0311 09:12:04.280630 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:04 crc kubenswrapper[4808]: I0311 09:12:04.404492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmxwg\" (UniqueName: \"kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg\") pod \"aa77e4ec-3714-4b83-9245-07d5cbc814d7\" (UID: \"aa77e4ec-3714-4b83-9245-07d5cbc814d7\") " Mar 11 09:12:04 crc kubenswrapper[4808]: I0311 09:12:04.411068 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg" (OuterVolumeSpecName: "kube-api-access-nmxwg") pod "aa77e4ec-3714-4b83-9245-07d5cbc814d7" (UID: "aa77e4ec-3714-4b83-9245-07d5cbc814d7"). InnerVolumeSpecName "kube-api-access-nmxwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:04 crc kubenswrapper[4808]: I0311 09:12:04.506345 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmxwg\" (UniqueName: \"kubernetes.io/projected/aa77e4ec-3714-4b83-9245-07d5cbc814d7-kube-api-access-nmxwg\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.032893 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" event={"ID":"aa77e4ec-3714-4b83-9245-07d5cbc814d7","Type":"ContainerDied","Data":"55ea1b7c6f57e3493ca857239a2bac386b9dc00eb3f5b21320db325386593726"} Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.032940 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ea1b7c6f57e3493ca857239a2bac386b9dc00eb3f5b21320db325386593726" Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.033011 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-vd5p2" Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.354629 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-rfbnf"] Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.362782 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-rfbnf"] Mar 11 09:12:05 crc kubenswrapper[4808]: I0311 09:12:05.804862 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a344d3bf-78f3-416b-b63f-d1ac03728baf" path="/var/lib/kubelet/pods/a344d3bf-78f3-416b-b63f-d1ac03728baf/volumes" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.227887 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:15 crc kubenswrapper[4808]: E0311 09:12:15.229314 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa77e4ec-3714-4b83-9245-07d5cbc814d7" containerName="oc" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.229346 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa77e4ec-3714-4b83-9245-07d5cbc814d7" containerName="oc" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.229726 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa77e4ec-3714-4b83-9245-07d5cbc814d7" containerName="oc" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.232300 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.239815 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.295516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.296035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.296236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.397626 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.397724 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.397765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.398853 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.399461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.403244 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.404612 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.426384 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5\") pod \"certified-operators-8gp7f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.433108 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.499049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8nvp\" (UniqueName: \"kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.499116 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.499146 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.572320 4808 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.600016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8nvp\" (UniqueName: \"kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.600070 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.600091 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.601172 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.601761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " 
pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.624004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8nvp\" (UniqueName: \"kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp\") pod \"community-operators-dm9bp\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:15 crc kubenswrapper[4808]: I0311 09:12:15.724079 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:16 crc kubenswrapper[4808]: I0311 09:12:16.068511 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:16 crc kubenswrapper[4808]: I0311 09:12:16.140390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerStarted","Data":"c831031457083af0ee74985ca58b14b5aebda39b37d5f2f668ab54e7b65b5909"} Mar 11 09:12:16 crc kubenswrapper[4808]: I0311 09:12:16.173700 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:16 crc kubenswrapper[4808]: W0311 09:12:16.182760 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9fd889b_eb2c_420b_b8df_6f28843170d3.slice/crio-f31ffa28f24605777b8cbd1ae242877c985aaf953f2a2b4a3e24b519daf3376f WatchSource:0}: Error finding container f31ffa28f24605777b8cbd1ae242877c985aaf953f2a2b4a3e24b519daf3376f: Status 404 returned error can't find the container with id f31ffa28f24605777b8cbd1ae242877c985aaf953f2a2b4a3e24b519daf3376f Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.147908 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerID="f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194" exitCode=0 Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.148000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerDied","Data":"f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194"} Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.148247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerStarted","Data":"f31ffa28f24605777b8cbd1ae242877c985aaf953f2a2b4a3e24b519daf3376f"} Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.150587 4808 generic.go:334] "Generic (PLEG): container finished" podID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerID="6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771" exitCode=0 Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.150622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerDied","Data":"6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771"} Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.811635 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.813240 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.821644 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.939147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.939494 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fnz\" (UniqueName: \"kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:17 crc kubenswrapper[4808]: I0311 09:12:17.939653 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.040753 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fnz\" (UniqueName: \"kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.040818 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.040896 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.041496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.041535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.070350 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fnz\" (UniqueName: \"kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz\") pod \"redhat-operators-bjc6b\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.158326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" 
event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerStarted","Data":"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8"} Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.247538 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:18 crc kubenswrapper[4808]: I0311 09:12:18.670745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:18 crc kubenswrapper[4808]: W0311 09:12:18.741967 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f48e9c_6f76_467b_868f_d8753273eebb.slice/crio-e837113ca6d4c0d4e7c03d7fd950a21e752c337c36dd0fe6d10b5d1ee774e6a5 WatchSource:0}: Error finding container e837113ca6d4c0d4e7c03d7fd950a21e752c337c36dd0fe6d10b5d1ee774e6a5: Status 404 returned error can't find the container with id e837113ca6d4c0d4e7c03d7fd950a21e752c337c36dd0fe6d10b5d1ee774e6a5 Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.167509 4808 generic.go:334] "Generic (PLEG): container finished" podID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerID="fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8" exitCode=0 Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.167739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerDied","Data":"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8"} Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.181013 4808 generic.go:334] "Generic (PLEG): container finished" podID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerID="34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481" exitCode=0 Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.181571 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerDied","Data":"34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481"} Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.181651 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerStarted","Data":"e837113ca6d4c0d4e7c03d7fd950a21e752c337c36dd0fe6d10b5d1ee774e6a5"} Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.187534 4808 generic.go:334] "Generic (PLEG): container finished" podID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerID="b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8" exitCode=0 Mar 11 09:12:19 crc kubenswrapper[4808]: I0311 09:12:19.187600 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerDied","Data":"b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8"} Mar 11 09:12:20 crc kubenswrapper[4808]: I0311 09:12:20.197815 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerStarted","Data":"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95"} Mar 11 09:12:20 crc kubenswrapper[4808]: I0311 09:12:20.202041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerStarted","Data":"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c"} Mar 11 09:12:20 crc kubenswrapper[4808]: I0311 09:12:20.204621 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" 
event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerStarted","Data":"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6"} Mar 11 09:12:20 crc kubenswrapper[4808]: I0311 09:12:20.243388 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dm9bp" podStartSLOduration=2.842139962 podStartE2EDuration="5.243341677s" podCreationTimestamp="2026-03-11 09:12:15 +0000 UTC" firstStartedPulling="2026-03-11 09:12:17.149488004 +0000 UTC m=+1988.102811324" lastFinishedPulling="2026-03-11 09:12:19.550689679 +0000 UTC m=+1990.504013039" observedRunningTime="2026-03-11 09:12:20.232161062 +0000 UTC m=+1991.185484382" watchObservedRunningTime="2026-03-11 09:12:20.243341677 +0000 UTC m=+1991.196664997" Mar 11 09:12:20 crc kubenswrapper[4808]: I0311 09:12:20.252822 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8gp7f" podStartSLOduration=2.429113575 podStartE2EDuration="5.252798043s" podCreationTimestamp="2026-03-11 09:12:15 +0000 UTC" firstStartedPulling="2026-03-11 09:12:17.157557021 +0000 UTC m=+1988.110880341" lastFinishedPulling="2026-03-11 09:12:19.981241489 +0000 UTC m=+1990.934564809" observedRunningTime="2026-03-11 09:12:20.247643958 +0000 UTC m=+1991.200967288" watchObservedRunningTime="2026-03-11 09:12:20.252798043 +0000 UTC m=+1991.206121363" Mar 11 09:12:21 crc kubenswrapper[4808]: I0311 09:12:21.212533 4808 generic.go:334] "Generic (PLEG): container finished" podID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerID="ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95" exitCode=0 Mar 11 09:12:21 crc kubenswrapper[4808]: I0311 09:12:21.212575 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerDied","Data":"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95"} Mar 
11 09:12:22 crc kubenswrapper[4808]: I0311 09:12:22.227021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerStarted","Data":"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4"} Mar 11 09:12:22 crc kubenswrapper[4808]: I0311 09:12:22.251028 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bjc6b" podStartSLOduration=2.6070311459999997 podStartE2EDuration="5.251008504s" podCreationTimestamp="2026-03-11 09:12:17 +0000 UTC" firstStartedPulling="2026-03-11 09:12:19.183856853 +0000 UTC m=+1990.137180173" lastFinishedPulling="2026-03-11 09:12:21.827834211 +0000 UTC m=+1992.781157531" observedRunningTime="2026-03-11 09:12:22.24802619 +0000 UTC m=+1993.201349530" watchObservedRunningTime="2026-03-11 09:12:22.251008504 +0000 UTC m=+1993.204331824" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.573076 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.573483 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.617723 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.725112 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.725191 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:25 crc kubenswrapper[4808]: I0311 09:12:25.774677 4808 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:26 crc kubenswrapper[4808]: I0311 09:12:26.303013 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:26 crc kubenswrapper[4808]: I0311 09:12:26.308059 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:27 crc kubenswrapper[4808]: I0311 09:12:27.596937 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:28 crc kubenswrapper[4808]: I0311 09:12:28.198312 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:28 crc kubenswrapper[4808]: I0311 09:12:28.248701 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:28 crc kubenswrapper[4808]: I0311 09:12:28.248762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:28 crc kubenswrapper[4808]: I0311 09:12:28.275048 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dm9bp" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="registry-server" containerID="cri-o://66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c" gracePeriod=2 Mar 11 09:12:28 crc kubenswrapper[4808]: I0311 09:12:28.275243 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8gp7f" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="registry-server" containerID="cri-o://0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6" gracePeriod=2 Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 
09:12:29.277118 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.289308 4808 generic.go:334] "Generic (PLEG): container finished" podID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerID="0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6" exitCode=0 Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.289378 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerDied","Data":"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6"} Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.289408 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gp7f" event={"ID":"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f","Type":"ContainerDied","Data":"c831031457083af0ee74985ca58b14b5aebda39b37d5f2f668ab54e7b65b5909"} Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.289415 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gp7f" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.289431 4808 scope.go:117] "RemoveContainer" containerID="0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.309137 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bjc6b" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="registry-server" probeResult="failure" output=< Mar 11 09:12:29 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 09:12:29 crc kubenswrapper[4808]: > Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.360483 4808 scope.go:117] "RemoveContainer" containerID="fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.384915 4808 scope.go:117] "RemoveContainer" containerID="6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.417589 4808 scope.go:117] "RemoveContainer" containerID="0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6" Mar 11 09:12:29 crc kubenswrapper[4808]: E0311 09:12:29.418263 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6\": container with ID starting with 0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6 not found: ID does not exist" containerID="0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.418394 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6"} err="failed to get container status 
\"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6\": rpc error: code = NotFound desc = could not find container \"0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6\": container with ID starting with 0fdcf2046e3203723366a2386f230662a4d09d2ffdad94ed7393a2a4fa956ee6 not found: ID does not exist" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.418475 4808 scope.go:117] "RemoveContainer" containerID="fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8" Mar 11 09:12:29 crc kubenswrapper[4808]: E0311 09:12:29.419109 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8\": container with ID starting with fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8 not found: ID does not exist" containerID="fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.419200 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8"} err="failed to get container status \"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8\": rpc error: code = NotFound desc = could not find container \"fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8\": container with ID starting with fa938e1c047b54f1143cafd47ad32fe6a52150d51fcf7383c66f293f7b2502b8 not found: ID does not exist" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.419269 4808 scope.go:117] "RemoveContainer" containerID="6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771" Mar 11 09:12:29 crc kubenswrapper[4808]: E0311 09:12:29.419982 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771\": container with ID starting with 6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771 not found: ID does not exist" containerID="6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.420028 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771"} err="failed to get container status \"6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771\": rpc error: code = NotFound desc = could not find container \"6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771\": container with ID starting with 6107997e38b583b08debdaf1f4f908115ef0fae3f3ff594d6f8ce8b267809771 not found: ID does not exist" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.420949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities" (OuterVolumeSpecName: "utilities") pod "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" (UID: "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.419349 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities\") pod \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.421157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content\") pod \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.423573 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5\") pod \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\" (UID: \"0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.424225 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.431417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5" (OuterVolumeSpecName: "kube-api-access-zd9s5") pod "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" (UID: "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f"). InnerVolumeSpecName "kube-api-access-zd9s5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.501728 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" (UID: "0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.525083 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.525111 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f-kube-api-access-zd9s5\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.640537 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.658058 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8gp7f"] Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.698505 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.797916 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" path="/var/lib/kubelet/pods/0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f/volumes" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.829859 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities\") pod \"f9fd889b-eb2c-420b-b8df-6f28843170d3\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.830061 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content\") pod \"f9fd889b-eb2c-420b-b8df-6f28843170d3\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.830159 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8nvp\" (UniqueName: \"kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp\") pod \"f9fd889b-eb2c-420b-b8df-6f28843170d3\" (UID: \"f9fd889b-eb2c-420b-b8df-6f28843170d3\") " Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.830706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities" (OuterVolumeSpecName: "utilities") pod "f9fd889b-eb2c-420b-b8df-6f28843170d3" (UID: "f9fd889b-eb2c-420b-b8df-6f28843170d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.833159 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp" (OuterVolumeSpecName: "kube-api-access-j8nvp") pod "f9fd889b-eb2c-420b-b8df-6f28843170d3" (UID: "f9fd889b-eb2c-420b-b8df-6f28843170d3"). InnerVolumeSpecName "kube-api-access-j8nvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.881668 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9fd889b-eb2c-420b-b8df-6f28843170d3" (UID: "f9fd889b-eb2c-420b-b8df-6f28843170d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.932603 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.932641 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8nvp\" (UniqueName: \"kubernetes.io/projected/f9fd889b-eb2c-420b-b8df-6f28843170d3-kube-api-access-j8nvp\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:29 crc kubenswrapper[4808]: I0311 09:12:29.932678 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fd889b-eb2c-420b-b8df-6f28843170d3-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.298097 4808 generic.go:334] "Generic (PLEG): container finished" podID="f9fd889b-eb2c-420b-b8df-6f28843170d3" 
containerID="66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c" exitCode=0 Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.298166 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dm9bp" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.298175 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerDied","Data":"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c"} Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.298199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dm9bp" event={"ID":"f9fd889b-eb2c-420b-b8df-6f28843170d3","Type":"ContainerDied","Data":"f31ffa28f24605777b8cbd1ae242877c985aaf953f2a2b4a3e24b519daf3376f"} Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.298232 4808 scope.go:117] "RemoveContainer" containerID="66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.314723 4808 scope.go:117] "RemoveContainer" containerID="b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.339439 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.346697 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dm9bp"] Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.350606 4808 scope.go:117] "RemoveContainer" containerID="f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.368810 4808 scope.go:117] "RemoveContainer" containerID="66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c" Mar 11 
09:12:30 crc kubenswrapper[4808]: E0311 09:12:30.369172 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c\": container with ID starting with 66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c not found: ID does not exist" containerID="66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.369200 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c"} err="failed to get container status \"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c\": rpc error: code = NotFound desc = could not find container \"66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c\": container with ID starting with 66e393cdd6632d238a12fe6b5c6eae2c1d364f9c201f18e805831fb71f44806c not found: ID does not exist" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.369219 4808 scope.go:117] "RemoveContainer" containerID="b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8" Mar 11 09:12:30 crc kubenswrapper[4808]: E0311 09:12:30.369531 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8\": container with ID starting with b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8 not found: ID does not exist" containerID="b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.369553 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8"} err="failed to get container status 
\"b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8\": rpc error: code = NotFound desc = could not find container \"b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8\": container with ID starting with b7ef47496063ac7f9f95545388811fee73d330946de8bfd27464318b26707ae8 not found: ID does not exist" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.369565 4808 scope.go:117] "RemoveContainer" containerID="f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194" Mar 11 09:12:30 crc kubenswrapper[4808]: E0311 09:12:30.369807 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194\": container with ID starting with f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194 not found: ID does not exist" containerID="f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194" Mar 11 09:12:30 crc kubenswrapper[4808]: I0311 09:12:30.369828 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194"} err="failed to get container status \"f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194\": rpc error: code = NotFound desc = could not find container \"f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194\": container with ID starting with f6ffdb29b856e1a1e8fe36a634440107d41dbad4ae4f54951e5a0d23c8180194 not found: ID does not exist" Mar 11 09:12:31 crc kubenswrapper[4808]: I0311 09:12:31.801079 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" path="/var/lib/kubelet/pods/f9fd889b-eb2c-420b-b8df-6f28843170d3/volumes" Mar 11 09:12:38 crc kubenswrapper[4808]: I0311 09:12:38.293639 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:38 crc kubenswrapper[4808]: I0311 09:12:38.336674 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:38 crc kubenswrapper[4808]: I0311 09:12:38.531030 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.372197 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bjc6b" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="registry-server" containerID="cri-o://bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4" gracePeriod=2 Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.738667 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.878658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content\") pod \"e9f48e9c-6f76-467b-868f-d8753273eebb\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.878741 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7fnz\" (UniqueName: \"kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz\") pod \"e9f48e9c-6f76-467b-868f-d8753273eebb\" (UID: \"e9f48e9c-6f76-467b-868f-d8753273eebb\") " Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.878795 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities\") pod \"e9f48e9c-6f76-467b-868f-d8753273eebb\" (UID: 
\"e9f48e9c-6f76-467b-868f-d8753273eebb\") " Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.879931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities" (OuterVolumeSpecName: "utilities") pod "e9f48e9c-6f76-467b-868f-d8753273eebb" (UID: "e9f48e9c-6f76-467b-868f-d8753273eebb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.884762 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz" (OuterVolumeSpecName: "kube-api-access-l7fnz") pod "e9f48e9c-6f76-467b-868f-d8753273eebb" (UID: "e9f48e9c-6f76-467b-868f-d8753273eebb"). InnerVolumeSpecName "kube-api-access-l7fnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.980337 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7fnz\" (UniqueName: \"kubernetes.io/projected/e9f48e9c-6f76-467b-868f-d8753273eebb-kube-api-access-l7fnz\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:39 crc kubenswrapper[4808]: I0311 09:12:39.980388 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.008089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9f48e9c-6f76-467b-868f-d8753273eebb" (UID: "e9f48e9c-6f76-467b-868f-d8753273eebb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.081934 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9f48e9c-6f76-467b-868f-d8753273eebb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.386067 4808 generic.go:334] "Generic (PLEG): container finished" podID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerID="bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4" exitCode=0 Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.386126 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjc6b" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.386144 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerDied","Data":"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4"} Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.386613 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjc6b" event={"ID":"e9f48e9c-6f76-467b-868f-d8753273eebb","Type":"ContainerDied","Data":"e837113ca6d4c0d4e7c03d7fd950a21e752c337c36dd0fe6d10b5d1ee774e6a5"} Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.386633 4808 scope.go:117] "RemoveContainer" containerID="bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.423852 4808 scope.go:117] "RemoveContainer" containerID="ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.426908 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 
09:12:40.432480 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bjc6b"] Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.457004 4808 scope.go:117] "RemoveContainer" containerID="34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.473694 4808 scope.go:117] "RemoveContainer" containerID="bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4" Mar 11 09:12:40 crc kubenswrapper[4808]: E0311 09:12:40.474103 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4\": container with ID starting with bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4 not found: ID does not exist" containerID="bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.474154 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4"} err="failed to get container status \"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4\": rpc error: code = NotFound desc = could not find container \"bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4\": container with ID starting with bf0003d57473aa8b6ad354588b55dbd469f0f7e3de973870b60dbf8a3137ead4 not found: ID does not exist" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.474184 4808 scope.go:117] "RemoveContainer" containerID="ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95" Mar 11 09:12:40 crc kubenswrapper[4808]: E0311 09:12:40.474660 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95\": container with ID 
starting with ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95 not found: ID does not exist" containerID="ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.474696 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95"} err="failed to get container status \"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95\": rpc error: code = NotFound desc = could not find container \"ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95\": container with ID starting with ed16d625594e11e802db234a37b5b826d4a714c1b8c4482d6c40eba8658eda95 not found: ID does not exist" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.474715 4808 scope.go:117] "RemoveContainer" containerID="34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481" Mar 11 09:12:40 crc kubenswrapper[4808]: E0311 09:12:40.475018 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481\": container with ID starting with 34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481 not found: ID does not exist" containerID="34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481" Mar 11 09:12:40 crc kubenswrapper[4808]: I0311 09:12:40.475079 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481"} err="failed to get container status \"34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481\": rpc error: code = NotFound desc = could not find container \"34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481\": container with ID starting with 34db354de4a8357b620fd2ad489488e13f8ab9d21b67d57bb1cfaaa6326bb481 not found: 
ID does not exist" Mar 11 09:12:41 crc kubenswrapper[4808]: I0311 09:12:41.797276 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" path="/var/lib/kubelet/pods/e9f48e9c-6f76-467b-868f-d8753273eebb/volumes" Mar 11 09:12:46 crc kubenswrapper[4808]: I0311 09:12:46.027901 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:12:46 crc kubenswrapper[4808]: I0311 09:12:46.028289 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:12:53 crc kubenswrapper[4808]: I0311 09:12:53.306538 4808 scope.go:117] "RemoveContainer" containerID="2ed7b46258c1046533435bd805e924e189e307712114cbed6cc8b3f1a10aeb7f" Mar 11 09:13:16 crc kubenswrapper[4808]: I0311 09:13:16.027137 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:13:16 crc kubenswrapper[4808]: I0311 09:13:16.027782 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 
09:13:46.027591 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.028079 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.028126 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.028732 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.028778 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb" gracePeriod=600 Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.948460 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb" exitCode=0 Mar 11 
09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.948514 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb"} Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.948874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108"} Mar 11 09:13:46 crc kubenswrapper[4808]: I0311 09:13:46.948909 4808 scope.go:117] "RemoveContainer" containerID="9a282cc104124a36930296e8446190e2ce24a7536755b142f5bbb0e29d5b97d9" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.143385 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553674-dqsl5"] Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144443 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144464 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144486 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144497 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144517 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" 
containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144526 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144548 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144558 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144576 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144585 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144601 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144612 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144629 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144638 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="extract-content" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144653 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" 
containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144662 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: E0311 09:14:00.144679 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144689 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="extract-utilities" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144900 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f48e9c-6f76-467b-868f-d8753273eebb" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144920 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fd889b-eb2c-420b-b8df-6f28843170d3" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.144940 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0d896e-aab8-4ee0-9fb9-7ec5958c8f1f" containerName="registry-server" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.145476 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.148463 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.148775 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.150846 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.156266 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-dqsl5"] Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.206788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2sc\" (UniqueName: \"kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc\") pod \"auto-csr-approver-29553674-dqsl5\" (UID: \"b639b262-9be1-49ad-baff-ea0b0cb51262\") " pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.307663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2sc\" (UniqueName: \"kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc\") pod \"auto-csr-approver-29553674-dqsl5\" (UID: \"b639b262-9be1-49ad-baff-ea0b0cb51262\") " pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.330769 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2sc\" (UniqueName: \"kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc\") pod \"auto-csr-approver-29553674-dqsl5\" (UID: \"b639b262-9be1-49ad-baff-ea0b0cb51262\") " 
pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.464755 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:00 crc kubenswrapper[4808]: I0311 09:14:00.916824 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-dqsl5"] Mar 11 09:14:00 crc kubenswrapper[4808]: W0311 09:14:00.921549 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb639b262_9be1_49ad_baff_ea0b0cb51262.slice/crio-62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa WatchSource:0}: Error finding container 62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa: Status 404 returned error can't find the container with id 62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa Mar 11 09:14:01 crc kubenswrapper[4808]: I0311 09:14:01.064967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" event={"ID":"b639b262-9be1-49ad-baff-ea0b0cb51262","Type":"ContainerStarted","Data":"62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa"} Mar 11 09:14:03 crc kubenswrapper[4808]: I0311 09:14:03.099085 4808 generic.go:334] "Generic (PLEG): container finished" podID="b639b262-9be1-49ad-baff-ea0b0cb51262" containerID="dc089052ca730e179db92a6ed8e5194fce59cbd2449c87cc77712734ffca52eb" exitCode=0 Mar 11 09:14:03 crc kubenswrapper[4808]: I0311 09:14:03.099190 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" event={"ID":"b639b262-9be1-49ad-baff-ea0b0cb51262","Type":"ContainerDied","Data":"dc089052ca730e179db92a6ed8e5194fce59cbd2449c87cc77712734ffca52eb"} Mar 11 09:14:04 crc kubenswrapper[4808]: I0311 09:14:04.440845 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:04 crc kubenswrapper[4808]: I0311 09:14:04.462980 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr2sc\" (UniqueName: \"kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc\") pod \"b639b262-9be1-49ad-baff-ea0b0cb51262\" (UID: \"b639b262-9be1-49ad-baff-ea0b0cb51262\") " Mar 11 09:14:04 crc kubenswrapper[4808]: I0311 09:14:04.471082 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc" (OuterVolumeSpecName: "kube-api-access-zr2sc") pod "b639b262-9be1-49ad-baff-ea0b0cb51262" (UID: "b639b262-9be1-49ad-baff-ea0b0cb51262"). InnerVolumeSpecName "kube-api-access-zr2sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:14:04 crc kubenswrapper[4808]: I0311 09:14:04.565434 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr2sc\" (UniqueName: \"kubernetes.io/projected/b639b262-9be1-49ad-baff-ea0b0cb51262-kube-api-access-zr2sc\") on node \"crc\" DevicePath \"\"" Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.122114 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" event={"ID":"b639b262-9be1-49ad-baff-ea0b0cb51262","Type":"ContainerDied","Data":"62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa"} Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.122162 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62822f7d2b9de514968a141bebb8839c975311111099859a9a9bc27ac637deaa" Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.122213 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-dqsl5" Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.513111 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-zthqs"] Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.518996 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-zthqs"] Mar 11 09:14:05 crc kubenswrapper[4808]: I0311 09:14:05.806072 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdb5fb0-486f-4682-8b57-8d8547c75c40" path="/var/lib/kubelet/pods/5cdb5fb0-486f-4682-8b57-8d8547c75c40/volumes" Mar 11 09:14:53 crc kubenswrapper[4808]: I0311 09:14:53.437960 4808 scope.go:117] "RemoveContainer" containerID="2e71b80d005e7f62280ffd44d21cda569cd2fc6ee8722b786691772fc9206bc8" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.145043 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq"] Mar 11 09:15:00 crc kubenswrapper[4808]: E0311 09:15:00.145997 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b639b262-9be1-49ad-baff-ea0b0cb51262" containerName="oc" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.146013 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b639b262-9be1-49ad-baff-ea0b0cb51262" containerName="oc" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.146198 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b639b262-9be1-49ad-baff-ea0b0cb51262" containerName="oc" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.146796 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.150397 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.152191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.152505 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq"] Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.215949 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.216015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.216243 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5q7\" (UniqueName: \"kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.317769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5q7\" (UniqueName: \"kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.317981 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.318020 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.320171 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.324776 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.345987 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5q7\" (UniqueName: \"kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7\") pod \"collect-profiles-29553675-pmqvq\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.511023 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:00 crc kubenswrapper[4808]: I0311 09:15:00.929076 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq"] Mar 11 09:15:01 crc kubenswrapper[4808]: I0311 09:15:01.565065 4808 generic.go:334] "Generic (PLEG): container finished" podID="069d0ae5-50fb-466e-878f-14380d00e7e4" containerID="25b72a33bfd5cc9f1796797f22bee5e2e0653ceb7aae48a65ad3d8d0e86aaa31" exitCode=0 Mar 11 09:15:01 crc kubenswrapper[4808]: I0311 09:15:01.565118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" event={"ID":"069d0ae5-50fb-466e-878f-14380d00e7e4","Type":"ContainerDied","Data":"25b72a33bfd5cc9f1796797f22bee5e2e0653ceb7aae48a65ad3d8d0e86aaa31"} Mar 11 09:15:01 crc kubenswrapper[4808]: I0311 09:15:01.565385 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" 
event={"ID":"069d0ae5-50fb-466e-878f-14380d00e7e4","Type":"ContainerStarted","Data":"4ea01beae0649a836a3d95bd9d0530341e029839e29512bb80376e677f5ad0f1"} Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.827204 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.864053 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume\") pod \"069d0ae5-50fb-466e-878f-14380d00e7e4\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.864109 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5q7\" (UniqueName: \"kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7\") pod \"069d0ae5-50fb-466e-878f-14380d00e7e4\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.864161 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume\") pod \"069d0ae5-50fb-466e-878f-14380d00e7e4\" (UID: \"069d0ae5-50fb-466e-878f-14380d00e7e4\") " Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.864836 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "069d0ae5-50fb-466e-878f-14380d00e7e4" (UID: "069d0ae5-50fb-466e-878f-14380d00e7e4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.870312 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "069d0ae5-50fb-466e-878f-14380d00e7e4" (UID: "069d0ae5-50fb-466e-878f-14380d00e7e4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.870868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7" (OuterVolumeSpecName: "kube-api-access-9n5q7") pod "069d0ae5-50fb-466e-878f-14380d00e7e4" (UID: "069d0ae5-50fb-466e-878f-14380d00e7e4"). InnerVolumeSpecName "kube-api-access-9n5q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.965440 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069d0ae5-50fb-466e-878f-14380d00e7e4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.965990 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5q7\" (UniqueName: \"kubernetes.io/projected/069d0ae5-50fb-466e-878f-14380d00e7e4-kube-api-access-9n5q7\") on node \"crc\" DevicePath \"\"" Mar 11 09:15:02 crc kubenswrapper[4808]: I0311 09:15:02.966010 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/069d0ae5-50fb-466e-878f-14380d00e7e4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:15:03 crc kubenswrapper[4808]: I0311 09:15:03.579796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" 
event={"ID":"069d0ae5-50fb-466e-878f-14380d00e7e4","Type":"ContainerDied","Data":"4ea01beae0649a836a3d95bd9d0530341e029839e29512bb80376e677f5ad0f1"} Mar 11 09:15:03 crc kubenswrapper[4808]: I0311 09:15:03.579851 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea01beae0649a836a3d95bd9d0530341e029839e29512bb80376e677f5ad0f1" Mar 11 09:15:03 crc kubenswrapper[4808]: I0311 09:15:03.579865 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq" Mar 11 09:15:03 crc kubenswrapper[4808]: I0311 09:15:03.943123 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8"] Mar 11 09:15:03 crc kubenswrapper[4808]: I0311 09:15:03.949258 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553630-gsnv8"] Mar 11 09:15:05 crc kubenswrapper[4808]: I0311 09:15:05.802418 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86edbb02-cc48-4845-86d4-51c46a4120bf" path="/var/lib/kubelet/pods/86edbb02-cc48-4845-86d4-51c46a4120bf/volumes" Mar 11 09:15:46 crc kubenswrapper[4808]: I0311 09:15:46.026934 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:15:46 crc kubenswrapper[4808]: I0311 09:15:46.027385 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:15:48 crc 
kubenswrapper[4808]: I0311 09:15:48.107780 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:15:48 crc kubenswrapper[4808]: E0311 09:15:48.108086 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069d0ae5-50fb-466e-878f-14380d00e7e4" containerName="collect-profiles" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.108099 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="069d0ae5-50fb-466e-878f-14380d00e7e4" containerName="collect-profiles" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.108246 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="069d0ae5-50fb-466e-878f-14380d00e7e4" containerName="collect-profiles" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.109290 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.124582 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.238468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.238768 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.238858 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpfr\" (UniqueName: \"kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.339976 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.340024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpfr\" (UniqueName: \"kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.340064 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.340644 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.340876 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.360040 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpfr\" (UniqueName: \"kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr\") pod \"redhat-marketplace-nxpmp\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.427769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.849272 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:15:48 crc kubenswrapper[4808]: I0311 09:15:48.885673 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerStarted","Data":"7d8eb9231302f81576ae6e99e77c3c0cd74ea4aa1308cd25a7991effc2d1dad9"} Mar 11 09:15:49 crc kubenswrapper[4808]: I0311 09:15:49.892297 4808 generic.go:334] "Generic (PLEG): container finished" podID="d981b59b-7e01-4d69-a359-c3365d04a710" containerID="bc202cccdcd7a4eb19f5b3144a02e3461e1ba4e3787534095a8a77acb3ed9ecb" exitCode=0 Mar 11 09:15:49 crc kubenswrapper[4808]: I0311 09:15:49.892371 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerDied","Data":"bc202cccdcd7a4eb19f5b3144a02e3461e1ba4e3787534095a8a77acb3ed9ecb"} Mar 11 09:15:49 crc 
kubenswrapper[4808]: I0311 09:15:49.895725 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:15:51 crc kubenswrapper[4808]: I0311 09:15:51.910582 4808 generic.go:334] "Generic (PLEG): container finished" podID="d981b59b-7e01-4d69-a359-c3365d04a710" containerID="a5e5597158bd35270b109a5fca9fffdc82eef2fd512228f2c96249d107a9153e" exitCode=0 Mar 11 09:15:51 crc kubenswrapper[4808]: I0311 09:15:51.910645 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerDied","Data":"a5e5597158bd35270b109a5fca9fffdc82eef2fd512228f2c96249d107a9153e"} Mar 11 09:15:52 crc kubenswrapper[4808]: I0311 09:15:52.920004 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerStarted","Data":"1cf72c5454b528f5930f9365c7c6e62014cafbce72ec2a80f33d56ffdb74bf5c"} Mar 11 09:15:52 crc kubenswrapper[4808]: I0311 09:15:52.940091 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nxpmp" podStartSLOduration=2.178900403 podStartE2EDuration="4.940069741s" podCreationTimestamp="2026-03-11 09:15:48 +0000 UTC" firstStartedPulling="2026-03-11 09:15:49.895529086 +0000 UTC m=+2200.848852406" lastFinishedPulling="2026-03-11 09:15:52.656698424 +0000 UTC m=+2203.610021744" observedRunningTime="2026-03-11 09:15:52.934291108 +0000 UTC m=+2203.887614428" watchObservedRunningTime="2026-03-11 09:15:52.940069741 +0000 UTC m=+2203.893393061" Mar 11 09:15:53 crc kubenswrapper[4808]: I0311 09:15:53.507060 4808 scope.go:117] "RemoveContainer" containerID="4c93a608601bdd1a5a8a72c0a02be858aa4010c878cb3b6b1a0c808f95563a34" Mar 11 09:15:58 crc kubenswrapper[4808]: I0311 09:15:58.428683 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:58 crc kubenswrapper[4808]: I0311 09:15:58.429213 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:58 crc kubenswrapper[4808]: I0311 09:15:58.471704 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:59 crc kubenswrapper[4808]: I0311 09:15:59.019473 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:15:59 crc kubenswrapper[4808]: I0311 09:15:59.495637 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.136244 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553676-hjx44"] Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.137386 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.139096 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.139931 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.144226 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.157089 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-hjx44"] Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.213389 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7mv\" (UniqueName: \"kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv\") pod \"auto-csr-approver-29553676-hjx44\" (UID: \"dd0e6e71-4916-40d5-a360-aa0500ba7c20\") " pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.314356 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7mv\" (UniqueName: \"kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv\") pod \"auto-csr-approver-29553676-hjx44\" (UID: \"dd0e6e71-4916-40d5-a360-aa0500ba7c20\") " pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.333270 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7mv\" (UniqueName: \"kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv\") pod \"auto-csr-approver-29553676-hjx44\" (UID: \"dd0e6e71-4916-40d5-a360-aa0500ba7c20\") " 
pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.457154 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.877906 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-hjx44"] Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.987884 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-hjx44" event={"ID":"dd0e6e71-4916-40d5-a360-aa0500ba7c20","Type":"ContainerStarted","Data":"d4ef901fdf3e10c91f7e03fa8d73559f1a37eb32779d27ee93ca4107306d772c"} Mar 11 09:16:00 crc kubenswrapper[4808]: I0311 09:16:00.988073 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nxpmp" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="registry-server" containerID="cri-o://1cf72c5454b528f5930f9365c7c6e62014cafbce72ec2a80f33d56ffdb74bf5c" gracePeriod=2 Mar 11 09:16:01 crc kubenswrapper[4808]: I0311 09:16:01.999873 4808 generic.go:334] "Generic (PLEG): container finished" podID="d981b59b-7e01-4d69-a359-c3365d04a710" containerID="1cf72c5454b528f5930f9365c7c6e62014cafbce72ec2a80f33d56ffdb74bf5c" exitCode=0 Mar 11 09:16:01 crc kubenswrapper[4808]: I0311 09:16:01.999913 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerDied","Data":"1cf72c5454b528f5930f9365c7c6e62014cafbce72ec2a80f33d56ffdb74bf5c"} Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:01.999936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxpmp" 
event={"ID":"d981b59b-7e01-4d69-a359-c3365d04a710","Type":"ContainerDied","Data":"7d8eb9231302f81576ae6e99e77c3c0cd74ea4aa1308cd25a7991effc2d1dad9"} Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:01.999947 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8eb9231302f81576ae6e99e77c3c0cd74ea4aa1308cd25a7991effc2d1dad9" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.019396 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.151636 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities\") pod \"d981b59b-7e01-4d69-a359-c3365d04a710\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.151705 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content\") pod \"d981b59b-7e01-4d69-a359-c3365d04a710\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.151786 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpfr\" (UniqueName: \"kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr\") pod \"d981b59b-7e01-4d69-a359-c3365d04a710\" (UID: \"d981b59b-7e01-4d69-a359-c3365d04a710\") " Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.153531 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities" (OuterVolumeSpecName: "utilities") pod "d981b59b-7e01-4d69-a359-c3365d04a710" (UID: "d981b59b-7e01-4d69-a359-c3365d04a710"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.159389 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr" (OuterVolumeSpecName: "kube-api-access-7kpfr") pod "d981b59b-7e01-4d69-a359-c3365d04a710" (UID: "d981b59b-7e01-4d69-a359-c3365d04a710"). InnerVolumeSpecName "kube-api-access-7kpfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.254348 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.254420 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpfr\" (UniqueName: \"kubernetes.io/projected/d981b59b-7e01-4d69-a359-c3365d04a710-kube-api-access-7kpfr\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.721587 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d981b59b-7e01-4d69-a359-c3365d04a710" (UID: "d981b59b-7e01-4d69-a359-c3365d04a710"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:16:02 crc kubenswrapper[4808]: I0311 09:16:02.762020 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d981b59b-7e01-4d69-a359-c3365d04a710-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:03 crc kubenswrapper[4808]: I0311 09:16:03.005078 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxpmp" Mar 11 09:16:03 crc kubenswrapper[4808]: I0311 09:16:03.059750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:16:03 crc kubenswrapper[4808]: I0311 09:16:03.066698 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxpmp"] Mar 11 09:16:03 crc kubenswrapper[4808]: I0311 09:16:03.797672 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" path="/var/lib/kubelet/pods/d981b59b-7e01-4d69-a359-c3365d04a710/volumes" Mar 11 09:16:04 crc kubenswrapper[4808]: I0311 09:16:04.013172 4808 generic.go:334] "Generic (PLEG): container finished" podID="dd0e6e71-4916-40d5-a360-aa0500ba7c20" containerID="eea579e030cb241572cc2f2a47b2290c0b690c1ebdc26ea3bbfe6ce6da9fec18" exitCode=0 Mar 11 09:16:04 crc kubenswrapper[4808]: I0311 09:16:04.013217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-hjx44" event={"ID":"dd0e6e71-4916-40d5-a360-aa0500ba7c20","Type":"ContainerDied","Data":"eea579e030cb241572cc2f2a47b2290c0b690c1ebdc26ea3bbfe6ce6da9fec18"} Mar 11 09:16:05 crc kubenswrapper[4808]: I0311 09:16:05.332472 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:05 crc kubenswrapper[4808]: I0311 09:16:05.396838 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7mv\" (UniqueName: \"kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv\") pod \"dd0e6e71-4916-40d5-a360-aa0500ba7c20\" (UID: \"dd0e6e71-4916-40d5-a360-aa0500ba7c20\") " Mar 11 09:16:05 crc kubenswrapper[4808]: I0311 09:16:05.401219 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv" (OuterVolumeSpecName: "kube-api-access-5t7mv") pod "dd0e6e71-4916-40d5-a360-aa0500ba7c20" (UID: "dd0e6e71-4916-40d5-a360-aa0500ba7c20"). InnerVolumeSpecName "kube-api-access-5t7mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4808]: I0311 09:16:05.498900 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7mv\" (UniqueName: \"kubernetes.io/projected/dd0e6e71-4916-40d5-a360-aa0500ba7c20-kube-api-access-5t7mv\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4808]: E0311 09:16:05.896574 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0e6e71_4916_40d5_a360_aa0500ba7c20.slice/crio-d4ef901fdf3e10c91f7e03fa8d73559f1a37eb32779d27ee93ca4107306d772c\": RecentStats: unable to find data in memory cache]" Mar 11 09:16:06 crc kubenswrapper[4808]: I0311 09:16:06.029505 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-hjx44" event={"ID":"dd0e6e71-4916-40d5-a360-aa0500ba7c20","Type":"ContainerDied","Data":"d4ef901fdf3e10c91f7e03fa8d73559f1a37eb32779d27ee93ca4107306d772c"} Mar 11 09:16:06 crc kubenswrapper[4808]: I0311 09:16:06.029547 4808 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ef901fdf3e10c91f7e03fa8d73559f1a37eb32779d27ee93ca4107306d772c" Mar 11 09:16:06 crc kubenswrapper[4808]: I0311 09:16:06.029598 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-hjx44" Mar 11 09:16:06 crc kubenswrapper[4808]: I0311 09:16:06.391622 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-qhptn"] Mar 11 09:16:06 crc kubenswrapper[4808]: I0311 09:16:06.398565 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-qhptn"] Mar 11 09:16:07 crc kubenswrapper[4808]: I0311 09:16:07.797486 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b" path="/var/lib/kubelet/pods/8dc7c529-f9ec-4c84-bcf3-5d5c97aa1b3b/volumes" Mar 11 09:16:16 crc kubenswrapper[4808]: I0311 09:16:16.027269 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:16:16 crc kubenswrapper[4808]: I0311 09:16:16.027832 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.027582 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.028179 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.028221 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.028920 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.028978 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" gracePeriod=600 Mar 11 09:16:46 crc kubenswrapper[4808]: E0311 09:16:46.163521 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:16:46 crc 
kubenswrapper[4808]: I0311 09:16:46.309073 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" exitCode=0 Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.309117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108"} Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.309152 4808 scope.go:117] "RemoveContainer" containerID="d9208ed99d4f952025aa3f09482678464767b3c565dc523726a51651018e18fb" Mar 11 09:16:46 crc kubenswrapper[4808]: I0311 09:16:46.309691 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:16:46 crc kubenswrapper[4808]: E0311 09:16:46.309942 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:16:53 crc kubenswrapper[4808]: I0311 09:16:53.566140 4808 scope.go:117] "RemoveContainer" containerID="f3fc88acc0e4e7832c4d82f271ad9f16e21fbe5b73a9e3013b2fe512a046b0aa" Mar 11 09:16:59 crc kubenswrapper[4808]: I0311 09:16:59.792945 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:16:59 crc kubenswrapper[4808]: E0311 09:16:59.793703 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:17:13 crc kubenswrapper[4808]: I0311 09:17:13.789951 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:17:13 crc kubenswrapper[4808]: E0311 09:17:13.790708 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:17:26 crc kubenswrapper[4808]: I0311 09:17:26.789509 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:17:26 crc kubenswrapper[4808]: E0311 09:17:26.790539 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:17:38 crc kubenswrapper[4808]: I0311 09:17:38.789438 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:17:38 crc kubenswrapper[4808]: E0311 09:17:38.790625 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:17:51 crc kubenswrapper[4808]: I0311 09:17:51.790108 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:17:51 crc kubenswrapper[4808]: E0311 09:17:51.790872 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.147190 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553678-p8dsw"] Mar 11 09:18:00 crc kubenswrapper[4808]: E0311 09:18:00.148202 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="registry-server" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148218 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="registry-server" Mar 11 09:18:00 crc kubenswrapper[4808]: E0311 09:18:00.148240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0e6e71-4916-40d5-a360-aa0500ba7c20" containerName="oc" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148246 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0e6e71-4916-40d5-a360-aa0500ba7c20" containerName="oc" Mar 11 09:18:00 crc kubenswrapper[4808]: E0311 09:18:00.148262 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="extract-content" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148269 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="extract-content" Mar 11 09:18:00 crc kubenswrapper[4808]: E0311 09:18:00.148278 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="extract-utilities" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148285 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="extract-utilities" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148440 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d981b59b-7e01-4d69-a359-c3365d04a710" containerName="registry-server" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.148465 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0e6e71-4916-40d5-a360-aa0500ba7c20" containerName="oc" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.149062 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.152985 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.153275 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.153517 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.156936 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-p8dsw"] Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.256584 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnpg\" (UniqueName: \"kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg\") pod \"auto-csr-approver-29553678-p8dsw\" (UID: \"43f15f5d-6b5e-42f9-a135-1386a2116d67\") " pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.359414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnpg\" (UniqueName: \"kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg\") pod \"auto-csr-approver-29553678-p8dsw\" (UID: \"43f15f5d-6b5e-42f9-a135-1386a2116d67\") " pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.379414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnpg\" (UniqueName: \"kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg\") pod \"auto-csr-approver-29553678-p8dsw\" (UID: \"43f15f5d-6b5e-42f9-a135-1386a2116d67\") " 
pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.480734 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:00 crc kubenswrapper[4808]: I0311 09:18:00.897500 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-p8dsw"] Mar 11 09:18:01 crc kubenswrapper[4808]: I0311 09:18:01.875411 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" event={"ID":"43f15f5d-6b5e-42f9-a135-1386a2116d67","Type":"ContainerStarted","Data":"81d65b2a4945266a3df5385f7c583fccc57cdf56f9432967f01b139e59b6c440"} Mar 11 09:18:02 crc kubenswrapper[4808]: I0311 09:18:02.888967 4808 generic.go:334] "Generic (PLEG): container finished" podID="43f15f5d-6b5e-42f9-a135-1386a2116d67" containerID="02cc1c44d1f249cba294d2c0e78111e65624569534e5a2f02fba0c3d8e26539a" exitCode=0 Mar 11 09:18:02 crc kubenswrapper[4808]: I0311 09:18:02.889065 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" event={"ID":"43f15f5d-6b5e-42f9-a135-1386a2116d67","Type":"ContainerDied","Data":"02cc1c44d1f249cba294d2c0e78111e65624569534e5a2f02fba0c3d8e26539a"} Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.175220 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.316636 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsnpg\" (UniqueName: \"kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg\") pod \"43f15f5d-6b5e-42f9-a135-1386a2116d67\" (UID: \"43f15f5d-6b5e-42f9-a135-1386a2116d67\") " Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.323577 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg" (OuterVolumeSpecName: "kube-api-access-nsnpg") pod "43f15f5d-6b5e-42f9-a135-1386a2116d67" (UID: "43f15f5d-6b5e-42f9-a135-1386a2116d67"). InnerVolumeSpecName "kube-api-access-nsnpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.419424 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsnpg\" (UniqueName: \"kubernetes.io/projected/43f15f5d-6b5e-42f9-a135-1386a2116d67-kube-api-access-nsnpg\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.905596 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" event={"ID":"43f15f5d-6b5e-42f9-a135-1386a2116d67","Type":"ContainerDied","Data":"81d65b2a4945266a3df5385f7c583fccc57cdf56f9432967f01b139e59b6c440"} Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.905655 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81d65b2a4945266a3df5385f7c583fccc57cdf56f9432967f01b139e59b6c440" Mar 11 09:18:04 crc kubenswrapper[4808]: I0311 09:18:04.905676 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-p8dsw" Mar 11 09:18:05 crc kubenswrapper[4808]: I0311 09:18:05.256868 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-vd5p2"] Mar 11 09:18:05 crc kubenswrapper[4808]: I0311 09:18:05.261818 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-vd5p2"] Mar 11 09:18:05 crc kubenswrapper[4808]: I0311 09:18:05.798304 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa77e4ec-3714-4b83-9245-07d5cbc814d7" path="/var/lib/kubelet/pods/aa77e4ec-3714-4b83-9245-07d5cbc814d7/volumes" Mar 11 09:18:06 crc kubenswrapper[4808]: I0311 09:18:06.789567 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:18:06 crc kubenswrapper[4808]: E0311 09:18:06.789786 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:18:17 crc kubenswrapper[4808]: I0311 09:18:17.790241 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:18:17 crc kubenswrapper[4808]: E0311 09:18:17.790998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:18:28 crc kubenswrapper[4808]: I0311 09:18:28.789348 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:18:28 crc kubenswrapper[4808]: E0311 09:18:28.790129 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:18:42 crc kubenswrapper[4808]: I0311 09:18:42.789706 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:18:42 crc kubenswrapper[4808]: E0311 09:18:42.790476 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:18:53 crc kubenswrapper[4808]: I0311 09:18:53.642708 4808 scope.go:117] "RemoveContainer" containerID="b768cdfaf7349316cb6f82547734b735c2235f4d8df6a9907758083a5f8874f7" Mar 11 09:18:56 crc kubenswrapper[4808]: I0311 09:18:56.789490 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:18:56 crc kubenswrapper[4808]: E0311 09:18:56.790348 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:19:09 crc kubenswrapper[4808]: I0311 09:19:09.796011 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:19:09 crc kubenswrapper[4808]: E0311 09:19:09.798409 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:19:21 crc kubenswrapper[4808]: I0311 09:19:21.789594 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:19:21 crc kubenswrapper[4808]: E0311 09:19:21.790530 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:19:32 crc kubenswrapper[4808]: I0311 09:19:32.789430 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:19:32 crc kubenswrapper[4808]: E0311 09:19:32.790153 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:19:44 crc kubenswrapper[4808]: I0311 09:19:44.788862 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:19:44 crc kubenswrapper[4808]: E0311 09:19:44.789588 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:19:59 crc kubenswrapper[4808]: I0311 09:19:59.797826 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:19:59 crc kubenswrapper[4808]: E0311 09:19:59.799196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.160689 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553680-n59sn"] Mar 11 09:20:00 crc kubenswrapper[4808]: E0311 09:20:00.161002 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f15f5d-6b5e-42f9-a135-1386a2116d67" containerName="oc" Mar 11 09:20:00 crc 
kubenswrapper[4808]: I0311 09:20:00.161014 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f15f5d-6b5e-42f9-a135-1386a2116d67" containerName="oc" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.161154 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f15f5d-6b5e-42f9-a135-1386a2116d67" containerName="oc" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.161682 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.164121 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.164434 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.164595 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.170030 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-n59sn"] Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.216519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67khl\" (UniqueName: \"kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl\") pod \"auto-csr-approver-29553680-n59sn\" (UID: \"930c46fa-8cc3-45ad-946a-2bc2f5ff0745\") " pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.318211 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67khl\" (UniqueName: \"kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl\") pod \"auto-csr-approver-29553680-n59sn\" 
(UID: \"930c46fa-8cc3-45ad-946a-2bc2f5ff0745\") " pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.341669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67khl\" (UniqueName: \"kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl\") pod \"auto-csr-approver-29553680-n59sn\" (UID: \"930c46fa-8cc3-45ad-946a-2bc2f5ff0745\") " pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.499082 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:00 crc kubenswrapper[4808]: I0311 09:20:00.978851 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-n59sn"] Mar 11 09:20:01 crc kubenswrapper[4808]: I0311 09:20:01.911315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-n59sn" event={"ID":"930c46fa-8cc3-45ad-946a-2bc2f5ff0745","Type":"ContainerStarted","Data":"7a0b2856c8182d8fdb4a00cca1c4b32d693168a537c47c38e59f2fa8760da5c1"} Mar 11 09:20:03 crc kubenswrapper[4808]: I0311 09:20:03.928464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-n59sn" event={"ID":"930c46fa-8cc3-45ad-946a-2bc2f5ff0745","Type":"ContainerStarted","Data":"f6a9d9787f703ff5353d96a250475b232a6e416ec8e0d071181c56878aca576b"} Mar 11 09:20:03 crc kubenswrapper[4808]: I0311 09:20:03.942448 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553680-n59sn" podStartSLOduration=1.4523023560000001 podStartE2EDuration="3.94242961s" podCreationTimestamp="2026-03-11 09:20:00 +0000 UTC" firstStartedPulling="2026-03-11 09:20:00.991613169 +0000 UTC m=+2451.944936499" lastFinishedPulling="2026-03-11 09:20:03.481740423 +0000 UTC 
m=+2454.435063753" observedRunningTime="2026-03-11 09:20:03.942109241 +0000 UTC m=+2454.895432571" watchObservedRunningTime="2026-03-11 09:20:03.94242961 +0000 UTC m=+2454.895752920" Mar 11 09:20:04 crc kubenswrapper[4808]: I0311 09:20:04.937065 4808 generic.go:334] "Generic (PLEG): container finished" podID="930c46fa-8cc3-45ad-946a-2bc2f5ff0745" containerID="f6a9d9787f703ff5353d96a250475b232a6e416ec8e0d071181c56878aca576b" exitCode=0 Mar 11 09:20:04 crc kubenswrapper[4808]: I0311 09:20:04.937131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-n59sn" event={"ID":"930c46fa-8cc3-45ad-946a-2bc2f5ff0745","Type":"ContainerDied","Data":"f6a9d9787f703ff5353d96a250475b232a6e416ec8e0d071181c56878aca576b"} Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.289526 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.410201 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67khl\" (UniqueName: \"kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl\") pod \"930c46fa-8cc3-45ad-946a-2bc2f5ff0745\" (UID: \"930c46fa-8cc3-45ad-946a-2bc2f5ff0745\") " Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.415423 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl" (OuterVolumeSpecName: "kube-api-access-67khl") pod "930c46fa-8cc3-45ad-946a-2bc2f5ff0745" (UID: "930c46fa-8cc3-45ad-946a-2bc2f5ff0745"). InnerVolumeSpecName "kube-api-access-67khl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.512326 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67khl\" (UniqueName: \"kubernetes.io/projected/930c46fa-8cc3-45ad-946a-2bc2f5ff0745-kube-api-access-67khl\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.966821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-n59sn" event={"ID":"930c46fa-8cc3-45ad-946a-2bc2f5ff0745","Type":"ContainerDied","Data":"7a0b2856c8182d8fdb4a00cca1c4b32d693168a537c47c38e59f2fa8760da5c1"} Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.966866 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0b2856c8182d8fdb4a00cca1c4b32d693168a537c47c38e59f2fa8760da5c1" Mar 11 09:20:06 crc kubenswrapper[4808]: I0311 09:20:06.966896 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-n59sn" Mar 11 09:20:07 crc kubenswrapper[4808]: I0311 09:20:07.020438 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-dqsl5"] Mar 11 09:20:07 crc kubenswrapper[4808]: I0311 09:20:07.026858 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-dqsl5"] Mar 11 09:20:07 crc kubenswrapper[4808]: I0311 09:20:07.812827 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b639b262-9be1-49ad-baff-ea0b0cb51262" path="/var/lib/kubelet/pods/b639b262-9be1-49ad-baff-ea0b0cb51262/volumes" Mar 11 09:20:13 crc kubenswrapper[4808]: I0311 09:20:13.789075 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:20:13 crc kubenswrapper[4808]: E0311 09:20:13.789858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:20:25 crc kubenswrapper[4808]: I0311 09:20:25.789650 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:20:25 crc kubenswrapper[4808]: E0311 09:20:25.790609 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:20:36 crc kubenswrapper[4808]: I0311 09:20:36.789925 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:20:36 crc kubenswrapper[4808]: E0311 09:20:36.791116 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:20:49 crc kubenswrapper[4808]: I0311 09:20:49.792690 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:20:49 crc kubenswrapper[4808]: E0311 09:20:49.793524 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:20:53 crc kubenswrapper[4808]: I0311 09:20:53.759093 4808 scope.go:117] "RemoveContainer" containerID="dc089052ca730e179db92a6ed8e5194fce59cbd2449c87cc77712734ffca52eb" Mar 11 09:21:04 crc kubenswrapper[4808]: I0311 09:21:04.789399 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:21:04 crc kubenswrapper[4808]: E0311 09:21:04.790051 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:21:15 crc kubenswrapper[4808]: I0311 09:21:15.789472 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:21:15 crc kubenswrapper[4808]: E0311 09:21:15.790669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:21:26 crc kubenswrapper[4808]: I0311 09:21:26.789914 4808 scope.go:117] 
"RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:21:26 crc kubenswrapper[4808]: E0311 09:21:26.790567 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:21:41 crc kubenswrapper[4808]: I0311 09:21:41.789826 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:21:41 crc kubenswrapper[4808]: E0311 09:21:41.790857 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:21:53 crc kubenswrapper[4808]: I0311 09:21:53.789876 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:21:53 crc kubenswrapper[4808]: I0311 09:21:53.852716 4808 scope.go:117] "RemoveContainer" containerID="a5e5597158bd35270b109a5fca9fffdc82eef2fd512228f2c96249d107a9153e" Mar 11 09:21:53 crc kubenswrapper[4808]: I0311 09:21:53.896471 4808 scope.go:117] "RemoveContainer" containerID="bc202cccdcd7a4eb19f5b3144a02e3461e1ba4e3787534095a8a77acb3ed9ecb" Mar 11 09:21:53 crc kubenswrapper[4808]: I0311 09:21:53.925997 4808 scope.go:117] "RemoveContainer" containerID="1cf72c5454b528f5930f9365c7c6e62014cafbce72ec2a80f33d56ffdb74bf5c" Mar 11 09:21:54 crc 
kubenswrapper[4808]: I0311 09:21:54.831438 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b"} Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.146202 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553682-h478n"] Mar 11 09:22:00 crc kubenswrapper[4808]: E0311 09:22:00.148173 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930c46fa-8cc3-45ad-946a-2bc2f5ff0745" containerName="oc" Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.148230 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="930c46fa-8cc3-45ad-946a-2bc2f5ff0745" containerName="oc" Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.148455 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="930c46fa-8cc3-45ad-946a-2bc2f5ff0745" containerName="oc" Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.149249 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.152292 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.152348 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.156976 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-h478n"]
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.158997 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.272884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd45v\" (UniqueName: \"kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v\") pod \"auto-csr-approver-29553682-h478n\" (UID: \"f8998cf7-d284-41cc-b5aa-699df0283d3a\") " pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.374339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd45v\" (UniqueName: \"kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v\") pod \"auto-csr-approver-29553682-h478n\" (UID: \"f8998cf7-d284-41cc-b5aa-699df0283d3a\") " pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.411084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd45v\" (UniqueName: \"kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v\") pod \"auto-csr-approver-29553682-h478n\" (UID: \"f8998cf7-d284-41cc-b5aa-699df0283d3a\") " pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.467988 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.777053 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-h478n"]
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.785912 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 11 09:22:00 crc kubenswrapper[4808]: I0311 09:22:00.872478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-h478n" event={"ID":"f8998cf7-d284-41cc-b5aa-699df0283d3a","Type":"ContainerStarted","Data":"0be63195bdf2f8aa63c1ea45a5eca8ca003df3315f374bce1975d016fd54833f"}
Mar 11 09:22:03 crc kubenswrapper[4808]: I0311 09:22:03.896638 4808 generic.go:334] "Generic (PLEG): container finished" podID="f8998cf7-d284-41cc-b5aa-699df0283d3a" containerID="229eb8c7dc4d4463b0d94f280010d7c479770a758c973a3e8d8e450b1dea11a2" exitCode=0
Mar 11 09:22:03 crc kubenswrapper[4808]: I0311 09:22:03.896689 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-h478n" event={"ID":"f8998cf7-d284-41cc-b5aa-699df0283d3a","Type":"ContainerDied","Data":"229eb8c7dc4d4463b0d94f280010d7c479770a758c973a3e8d8e450b1dea11a2"}
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.282830 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.445719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd45v\" (UniqueName: \"kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v\") pod \"f8998cf7-d284-41cc-b5aa-699df0283d3a\" (UID: \"f8998cf7-d284-41cc-b5aa-699df0283d3a\") "
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.453611 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v" (OuterVolumeSpecName: "kube-api-access-nd45v") pod "f8998cf7-d284-41cc-b5aa-699df0283d3a" (UID: "f8998cf7-d284-41cc-b5aa-699df0283d3a"). InnerVolumeSpecName "kube-api-access-nd45v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.547846 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd45v\" (UniqueName: \"kubernetes.io/projected/f8998cf7-d284-41cc-b5aa-699df0283d3a-kube-api-access-nd45v\") on node \"crc\" DevicePath \"\""
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.920143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-h478n" event={"ID":"f8998cf7-d284-41cc-b5aa-699df0283d3a","Type":"ContainerDied","Data":"0be63195bdf2f8aa63c1ea45a5eca8ca003df3315f374bce1975d016fd54833f"}
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.920447 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be63195bdf2f8aa63c1ea45a5eca8ca003df3315f374bce1975d016fd54833f"
Mar 11 09:22:05 crc kubenswrapper[4808]: I0311 09:22:05.920243 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-h478n"
Mar 11 09:22:06 crc kubenswrapper[4808]: I0311 09:22:06.367938 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-hjx44"]
Mar 11 09:22:06 crc kubenswrapper[4808]: I0311 09:22:06.375530 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-hjx44"]
Mar 11 09:22:07 crc kubenswrapper[4808]: I0311 09:22:07.799959 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0e6e71-4916-40d5-a360-aa0500ba7c20" path="/var/lib/kubelet/pods/dd0e6e71-4916-40d5-a360-aa0500ba7c20/volumes"
Mar 11 09:22:53 crc kubenswrapper[4808]: I0311 09:22:53.984475 4808 scope.go:117] "RemoveContainer" containerID="eea579e030cb241572cc2f2a47b2290c0b690c1ebdc26ea3bbfe6ce6da9fec18"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.183224 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:22:55 crc kubenswrapper[4808]: E0311 09:22:55.183829 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8998cf7-d284-41cc-b5aa-699df0283d3a" containerName="oc"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.183840 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8998cf7-d284-41cc-b5aa-699df0283d3a" containerName="oc"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.183985 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8998cf7-d284-41cc-b5aa-699df0283d3a" containerName="oc"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.184923 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.209636 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.212672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.212735 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9qn\" (UniqueName: \"kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.212799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.314351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.314416 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9qn\" (UniqueName: \"kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.314448 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.315043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.315046 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.339616 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9qn\" (UniqueName: \"kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn\") pod \"certified-operators-mtsf5\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") " pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.504640 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:22:55 crc kubenswrapper[4808]: I0311 09:22:55.942711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:22:56 crc kubenswrapper[4808]: I0311 09:22:56.366287 4808 generic.go:334] "Generic (PLEG): container finished" podID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerID="386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8" exitCode=0
Mar 11 09:22:56 crc kubenswrapper[4808]: I0311 09:22:56.366329 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerDied","Data":"386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8"}
Mar 11 09:22:56 crc kubenswrapper[4808]: I0311 09:22:56.366351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerStarted","Data":"b6c03239178e9a5c7640903e3c678c746455c103cbbabd33468f4f3d471c620b"}
Mar 11 09:22:57 crc kubenswrapper[4808]: I0311 09:22:57.374130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerStarted","Data":"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"}
Mar 11 09:22:58 crc kubenswrapper[4808]: I0311 09:22:58.388950 4808 generic.go:334] "Generic (PLEG): container finished" podID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerID="6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9" exitCode=0
Mar 11 09:22:58 crc kubenswrapper[4808]: I0311 09:22:58.389009 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerDied","Data":"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"}
Mar 11 09:22:59 crc kubenswrapper[4808]: I0311 09:22:59.400382 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerStarted","Data":"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"}
Mar 11 09:22:59 crc kubenswrapper[4808]: I0311 09:22:59.421402 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtsf5" podStartSLOduration=1.930983485 podStartE2EDuration="4.421387507s" podCreationTimestamp="2026-03-11 09:22:55 +0000 UTC" firstStartedPulling="2026-03-11 09:22:56.368115163 +0000 UTC m=+2627.321438493" lastFinishedPulling="2026-03-11 09:22:58.858519155 +0000 UTC m=+2629.811842515" observedRunningTime="2026-03-11 09:22:59.420129502 +0000 UTC m=+2630.373452832" watchObservedRunningTime="2026-03-11 09:22:59.421387507 +0000 UTC m=+2630.374710827"
Mar 11 09:23:05 crc kubenswrapper[4808]: I0311 09:23:05.505822 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:05 crc kubenswrapper[4808]: I0311 09:23:05.508449 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:05 crc kubenswrapper[4808]: I0311 09:23:05.584804 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:06 crc kubenswrapper[4808]: I0311 09:23:06.518845 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:06 crc kubenswrapper[4808]: I0311 09:23:06.567015 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:23:08 crc kubenswrapper[4808]: I0311 09:23:08.474302 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtsf5" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="registry-server" containerID="cri-o://00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb" gracePeriod=2
Mar 11 09:23:08 crc kubenswrapper[4808]: I0311 09:23:08.899621 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.019655 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities\") pod \"1429fc6b-05da-4f25-81c3-29984dfa4805\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") "
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.019734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw9qn\" (UniqueName: \"kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn\") pod \"1429fc6b-05da-4f25-81c3-29984dfa4805\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") "
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.019806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content\") pod \"1429fc6b-05da-4f25-81c3-29984dfa4805\" (UID: \"1429fc6b-05da-4f25-81c3-29984dfa4805\") "
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.021010 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities" (OuterVolumeSpecName: "utilities") pod "1429fc6b-05da-4f25-81c3-29984dfa4805" (UID: "1429fc6b-05da-4f25-81c3-29984dfa4805"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.028705 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn" (OuterVolumeSpecName: "kube-api-access-pw9qn") pod "1429fc6b-05da-4f25-81c3-29984dfa4805" (UID: "1429fc6b-05da-4f25-81c3-29984dfa4805"). InnerVolumeSpecName "kube-api-access-pw9qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.105873 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1429fc6b-05da-4f25-81c3-29984dfa4805" (UID: "1429fc6b-05da-4f25-81c3-29984dfa4805"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.122537 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.122605 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw9qn\" (UniqueName: \"kubernetes.io/projected/1429fc6b-05da-4f25-81c3-29984dfa4805-kube-api-access-pw9qn\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.122634 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1429fc6b-05da-4f25-81c3-29984dfa4805-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.484471 4808 generic.go:334] "Generic (PLEG): container finished" podID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerID="00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb" exitCode=0
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.484513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerDied","Data":"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"}
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.484540 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtsf5" event={"ID":"1429fc6b-05da-4f25-81c3-29984dfa4805","Type":"ContainerDied","Data":"b6c03239178e9a5c7640903e3c678c746455c103cbbabd33468f4f3d471c620b"}
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.484575 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtsf5"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.484579 4808 scope.go:117] "RemoveContainer" containerID="00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.508636 4808 scope.go:117] "RemoveContainer" containerID="6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.524530 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.529697 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtsf5"]
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.553071 4808 scope.go:117] "RemoveContainer" containerID="386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.569869 4808 scope.go:117] "RemoveContainer" containerID="00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"
Mar 11 09:23:09 crc kubenswrapper[4808]: E0311 09:23:09.570395 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb\": container with ID starting with 00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb not found: ID does not exist" containerID="00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.570435 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb"} err="failed to get container status \"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb\": rpc error: code = NotFound desc = could not find container \"00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb\": container with ID starting with 00f4de7c4bfd779efcd7f1a2504cd8e102c4f651ffd85f749903c9586c2a0feb not found: ID does not exist"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.570456 4808 scope.go:117] "RemoveContainer" containerID="6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"
Mar 11 09:23:09 crc kubenswrapper[4808]: E0311 09:23:09.570788 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9\": container with ID starting with 6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9 not found: ID does not exist" containerID="6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.570809 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9"} err="failed to get container status \"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9\": rpc error: code = NotFound desc = could not find container \"6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9\": container with ID starting with 6e080d73ae18babbcf29029600aa89f4aa200003c25b6d8c828457c7bfe065d9 not found: ID does not exist"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.570822 4808 scope.go:117] "RemoveContainer" containerID="386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8"
Mar 11 09:23:09 crc kubenswrapper[4808]: E0311 09:23:09.571077 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8\": container with ID starting with 386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8 not found: ID does not exist" containerID="386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.571103 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8"} err="failed to get container status \"386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8\": rpc error: code = NotFound desc = could not find container \"386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8\": container with ID starting with 386803b9e7c87a8226ce1a008007dc728d6c06d2603d91e21e0c32f1edd841f8 not found: ID does not exist"
Mar 11 09:23:09 crc kubenswrapper[4808]: I0311 09:23:09.807306 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" path="/var/lib/kubelet/pods/1429fc6b-05da-4f25-81c3-29984dfa4805/volumes"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.006959 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:19 crc kubenswrapper[4808]: E0311 09:23:19.008168 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="extract-utilities"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.008195 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="extract-utilities"
Mar 11 09:23:19 crc kubenswrapper[4808]: E0311 09:23:19.008241 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="extract-content"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.008254 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="extract-content"
Mar 11 09:23:19 crc kubenswrapper[4808]: E0311 09:23:19.008280 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="registry-server"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.008296 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="registry-server"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.008640 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1429fc6b-05da-4f25-81c3-29984dfa4805" containerName="registry-server"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.010864 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.039426 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.173701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.173797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5km4\" (UniqueName: \"kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.173830 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.275712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.275791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5km4\" (UniqueName: \"kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.275816 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.276296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.276551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.295317 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5km4\" (UniqueName: \"kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4\") pod \"community-operators-xzf2v\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") " pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.339334 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:19 crc kubenswrapper[4808]: I0311 09:23:19.870499 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:20 crc kubenswrapper[4808]: I0311 09:23:20.595130 4808 generic.go:334] "Generic (PLEG): container finished" podID="eef17312-d225-41c1-81be-4275ea1f2202" containerID="b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310" exitCode=0
Mar 11 09:23:20 crc kubenswrapper[4808]: I0311 09:23:20.595518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerDied","Data":"b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310"}
Mar 11 09:23:20 crc kubenswrapper[4808]: I0311 09:23:20.595563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerStarted","Data":"b21a717adbe12da5687d25c9e955583ccac19c0db7bf5120ab33fb9e613410c4"}
Mar 11 09:23:22 crc kubenswrapper[4808]: I0311 09:23:22.614831 4808 generic.go:334] "Generic (PLEG): container finished" podID="eef17312-d225-41c1-81be-4275ea1f2202" containerID="28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80" exitCode=0
Mar 11 09:23:22 crc kubenswrapper[4808]: I0311 09:23:22.614909 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerDied","Data":"28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80"}
Mar 11 09:23:23 crc kubenswrapper[4808]: I0311 09:23:23.624220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerStarted","Data":"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192"}
Mar 11 09:23:23 crc kubenswrapper[4808]: I0311 09:23:23.646812 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xzf2v" podStartSLOduration=3.105379674 podStartE2EDuration="5.646788719s" podCreationTimestamp="2026-03-11 09:23:18 +0000 UTC" firstStartedPulling="2026-03-11 09:23:20.597801246 +0000 UTC m=+2651.551124596" lastFinishedPulling="2026-03-11 09:23:23.139210311 +0000 UTC m=+2654.092533641" observedRunningTime="2026-03-11 09:23:23.646642985 +0000 UTC m=+2654.599966325" watchObservedRunningTime="2026-03-11 09:23:23.646788719 +0000 UTC m=+2654.600112039"
Mar 11 09:23:29 crc kubenswrapper[4808]: I0311 09:23:29.339749 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:29 crc kubenswrapper[4808]: I0311 09:23:29.340317 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:29 crc kubenswrapper[4808]: I0311 09:23:29.421490 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:29 crc kubenswrapper[4808]: I0311 09:23:29.745847 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:29 crc kubenswrapper[4808]: I0311 09:23:29.820671 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:31 crc kubenswrapper[4808]: I0311 09:23:31.709876 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xzf2v" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="registry-server" containerID="cri-o://e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192" gracePeriod=2
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.138122 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.195770 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities\") pod \"eef17312-d225-41c1-81be-4275ea1f2202\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") "
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.195902 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5km4\" (UniqueName: \"kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4\") pod \"eef17312-d225-41c1-81be-4275ea1f2202\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") "
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.195967 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content\") pod \"eef17312-d225-41c1-81be-4275ea1f2202\" (UID: \"eef17312-d225-41c1-81be-4275ea1f2202\") "
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.197288 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities" (OuterVolumeSpecName: "utilities") pod "eef17312-d225-41c1-81be-4275ea1f2202" (UID: "eef17312-d225-41c1-81be-4275ea1f2202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.205014 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4" (OuterVolumeSpecName: "kube-api-access-d5km4") pod "eef17312-d225-41c1-81be-4275ea1f2202" (UID: "eef17312-d225-41c1-81be-4275ea1f2202"). InnerVolumeSpecName "kube-api-access-d5km4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.268784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eef17312-d225-41c1-81be-4275ea1f2202" (UID: "eef17312-d225-41c1-81be-4275ea1f2202"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.297021 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.297048 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5km4\" (UniqueName: \"kubernetes.io/projected/eef17312-d225-41c1-81be-4275ea1f2202-kube-api-access-d5km4\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.297061 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef17312-d225-41c1-81be-4275ea1f2202-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.725506 4808 generic.go:334] "Generic (PLEG): container finished" podID="eef17312-d225-41c1-81be-4275ea1f2202" containerID="e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192" exitCode=0
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.725603 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzf2v"
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.725599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerDied","Data":"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192"}
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.726135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzf2v" event={"ID":"eef17312-d225-41c1-81be-4275ea1f2202","Type":"ContainerDied","Data":"b21a717adbe12da5687d25c9e955583ccac19c0db7bf5120ab33fb9e613410c4"}
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.726170 4808 scope.go:117] "RemoveContainer" containerID="e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192"
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.775817 4808 scope.go:117] "RemoveContainer" containerID="28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80"
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.779709 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.785205 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xzf2v"]
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.795489 4808 scope.go:117] "RemoveContainer" containerID="b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310"
Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.824029 4808 scope.go:117] "RemoveContainer" containerID="e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192"
Mar 11
09:23:32 crc kubenswrapper[4808]: E0311 09:23:32.824589 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192\": container with ID starting with e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192 not found: ID does not exist" containerID="e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192" Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.824632 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192"} err="failed to get container status \"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192\": rpc error: code = NotFound desc = could not find container \"e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192\": container with ID starting with e79b8c3a0e81d5c2c73280a7b512d3ea7f78656d6f89401f7d760d9d214a3192 not found: ID does not exist" Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.824658 4808 scope.go:117] "RemoveContainer" containerID="28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80" Mar 11 09:23:32 crc kubenswrapper[4808]: E0311 09:23:32.825213 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80\": container with ID starting with 28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80 not found: ID does not exist" containerID="28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80" Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.825268 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80"} err="failed to get container status 
\"28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80\": rpc error: code = NotFound desc = could not find container \"28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80\": container with ID starting with 28521c3ee049456805a7de5aa2a8dba2511f5ec4614681f179e0243886e7ac80 not found: ID does not exist" Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.825302 4808 scope.go:117] "RemoveContainer" containerID="b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310" Mar 11 09:23:32 crc kubenswrapper[4808]: E0311 09:23:32.825753 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310\": container with ID starting with b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310 not found: ID does not exist" containerID="b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310" Mar 11 09:23:32 crc kubenswrapper[4808]: I0311 09:23:32.825807 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310"} err="failed to get container status \"b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310\": rpc error: code = NotFound desc = could not find container \"b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310\": container with ID starting with b9ca559ad1f6febd8baf07b88955fd00b51004c550a6a83d5f9d90940ee63310 not found: ID does not exist" Mar 11 09:23:33 crc kubenswrapper[4808]: I0311 09:23:33.808455 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef17312-d225-41c1-81be-4275ea1f2202" path="/var/lib/kubelet/pods/eef17312-d225-41c1-81be-4275ea1f2202/volumes" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.140185 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553684-fqxrz"] Mar 11 09:24:00 
crc kubenswrapper[4808]: E0311 09:24:00.141986 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="extract-content" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.142009 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="extract-content" Mar 11 09:24:00 crc kubenswrapper[4808]: E0311 09:24:00.142027 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="extract-utilities" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.142035 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="extract-utilities" Mar 11 09:24:00 crc kubenswrapper[4808]: E0311 09:24:00.142052 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.142059 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.142277 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef17312-d225-41c1-81be-4275ea1f2202" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.142870 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.144833 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.145424 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.145551 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.156205 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-fqxrz"] Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.243945 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpz8l\" (UniqueName: \"kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l\") pod \"auto-csr-approver-29553684-fqxrz\" (UID: \"905c244d-08d7-45cd-9d63-ebe879768c31\") " pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.345759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpz8l\" (UniqueName: \"kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l\") pod \"auto-csr-approver-29553684-fqxrz\" (UID: \"905c244d-08d7-45cd-9d63-ebe879768c31\") " pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.370710 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpz8l\" (UniqueName: \"kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l\") pod \"auto-csr-approver-29553684-fqxrz\" (UID: \"905c244d-08d7-45cd-9d63-ebe879768c31\") " 
pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.471033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.900153 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-fqxrz"] Mar 11 09:24:00 crc kubenswrapper[4808]: I0311 09:24:00.960250 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" event={"ID":"905c244d-08d7-45cd-9d63-ebe879768c31","Type":"ContainerStarted","Data":"0016f12288da7c056a88de5c7e0124e653e6604342eb850c7552bb03b0f1203f"} Mar 11 09:24:02 crc kubenswrapper[4808]: I0311 09:24:02.980956 4808 generic.go:334] "Generic (PLEG): container finished" podID="905c244d-08d7-45cd-9d63-ebe879768c31" containerID="3ba3491182a0aa1a4bb571a427424cb21e9b5c5f2ba7b9e73b6ea5c423d524ea" exitCode=0 Mar 11 09:24:02 crc kubenswrapper[4808]: I0311 09:24:02.981235 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" event={"ID":"905c244d-08d7-45cd-9d63-ebe879768c31","Type":"ContainerDied","Data":"3ba3491182a0aa1a4bb571a427424cb21e9b5c5f2ba7b9e73b6ea5c423d524ea"} Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.304567 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.400765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpz8l\" (UniqueName: \"kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l\") pod \"905c244d-08d7-45cd-9d63-ebe879768c31\" (UID: \"905c244d-08d7-45cd-9d63-ebe879768c31\") " Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.406232 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l" (OuterVolumeSpecName: "kube-api-access-hpz8l") pod "905c244d-08d7-45cd-9d63-ebe879768c31" (UID: "905c244d-08d7-45cd-9d63-ebe879768c31"). InnerVolumeSpecName "kube-api-access-hpz8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.502308 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpz8l\" (UniqueName: \"kubernetes.io/projected/905c244d-08d7-45cd-9d63-ebe879768c31-kube-api-access-hpz8l\") on node \"crc\" DevicePath \"\"" Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.997825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" event={"ID":"905c244d-08d7-45cd-9d63-ebe879768c31","Type":"ContainerDied","Data":"0016f12288da7c056a88de5c7e0124e653e6604342eb850c7552bb03b0f1203f"} Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.997882 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0016f12288da7c056a88de5c7e0124e653e6604342eb850c7552bb03b0f1203f" Mar 11 09:24:04 crc kubenswrapper[4808]: I0311 09:24:04.997966 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-fqxrz" Mar 11 09:24:05 crc kubenswrapper[4808]: I0311 09:24:05.368982 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-p8dsw"] Mar 11 09:24:05 crc kubenswrapper[4808]: I0311 09:24:05.374153 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-p8dsw"] Mar 11 09:24:05 crc kubenswrapper[4808]: I0311 09:24:05.808833 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f15f5d-6b5e-42f9-a135-1386a2116d67" path="/var/lib/kubelet/pods/43f15f5d-6b5e-42f9-a135-1386a2116d67/volumes" Mar 11 09:24:16 crc kubenswrapper[4808]: I0311 09:24:16.027039 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:24:16 crc kubenswrapper[4808]: I0311 09:24:16.027641 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:24:46 crc kubenswrapper[4808]: I0311 09:24:46.027896 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:24:46 crc kubenswrapper[4808]: I0311 09:24:46.028502 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:24:54 crc kubenswrapper[4808]: I0311 09:24:54.123405 4808 scope.go:117] "RemoveContainer" containerID="02cc1c44d1f249cba294d2c0e78111e65624569534e5a2f02fba0c3d8e26539a" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.334100 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwp6r"] Mar 11 09:25:11 crc kubenswrapper[4808]: E0311 09:25:11.336432 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905c244d-08d7-45cd-9d63-ebe879768c31" containerName="oc" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.336549 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="905c244d-08d7-45cd-9d63-ebe879768c31" containerName="oc" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.336815 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="905c244d-08d7-45cd-9d63-ebe879768c31" containerName="oc" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.338256 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.343666 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwp6r"] Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.468062 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x87n\" (UniqueName: \"kubernetes.io/projected/0426dadd-5c0b-4f78-8364-7c09b73b51dc-kube-api-access-8x87n\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.468133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-catalog-content\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.468151 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-utilities\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.569576 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x87n\" (UniqueName: \"kubernetes.io/projected/0426dadd-5c0b-4f78-8364-7c09b73b51dc-kube-api-access-8x87n\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.569665 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-catalog-content\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.569691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-utilities\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.570198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-utilities\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.570276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0426dadd-5c0b-4f78-8364-7c09b73b51dc-catalog-content\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.589203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x87n\" (UniqueName: \"kubernetes.io/projected/0426dadd-5c0b-4f78-8364-7c09b73b51dc-kube-api-access-8x87n\") pod \"redhat-operators-hwp6r\" (UID: \"0426dadd-5c0b-4f78-8364-7c09b73b51dc\") " pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:11 crc kubenswrapper[4808]: I0311 09:25:11.654740 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:12 crc kubenswrapper[4808]: I0311 09:25:12.148937 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwp6r"] Mar 11 09:25:12 crc kubenswrapper[4808]: W0311 09:25:12.157413 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0426dadd_5c0b_4f78_8364_7c09b73b51dc.slice/crio-0d4a9021298af5182657a6cda3ad97575d2109311e3cd4b0061c66609338374a WatchSource:0}: Error finding container 0d4a9021298af5182657a6cda3ad97575d2109311e3cd4b0061c66609338374a: Status 404 returned error can't find the container with id 0d4a9021298af5182657a6cda3ad97575d2109311e3cd4b0061c66609338374a Mar 11 09:25:12 crc kubenswrapper[4808]: I0311 09:25:12.556713 4808 generic.go:334] "Generic (PLEG): container finished" podID="0426dadd-5c0b-4f78-8364-7c09b73b51dc" containerID="a7d65464e040e99a0e49285c4427bacf9dad420db279e71d333b1af8625e0cda" exitCode=0 Mar 11 09:25:12 crc kubenswrapper[4808]: I0311 09:25:12.556884 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwp6r" event={"ID":"0426dadd-5c0b-4f78-8364-7c09b73b51dc","Type":"ContainerDied","Data":"a7d65464e040e99a0e49285c4427bacf9dad420db279e71d333b1af8625e0cda"} Mar 11 09:25:12 crc kubenswrapper[4808]: I0311 09:25:12.557048 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwp6r" event={"ID":"0426dadd-5c0b-4f78-8364-7c09b73b51dc","Type":"ContainerStarted","Data":"0d4a9021298af5182657a6cda3ad97575d2109311e3cd4b0061c66609338374a"} Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.027910 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.028252 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.028301 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.029083 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.029166 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b" gracePeriod=600 Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.587727 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b" exitCode=0 Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.587765 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b"} Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.588404 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa"} Mar 11 09:25:16 crc kubenswrapper[4808]: I0311 09:25:16.588422 4808 scope.go:117] "RemoveContainer" containerID="7e18e081ed5442135a787a1914f64692c1d04ef480c559dafcef0144de7c2108" Mar 11 09:25:20 crc kubenswrapper[4808]: I0311 09:25:20.623628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwp6r" event={"ID":"0426dadd-5c0b-4f78-8364-7c09b73b51dc","Type":"ContainerStarted","Data":"688c1e4938a0eec6939a488695b9d2c0ced0a2ca6f798a873da8157dea83a5e8"} Mar 11 09:25:21 crc kubenswrapper[4808]: I0311 09:25:21.632059 4808 generic.go:334] "Generic (PLEG): container finished" podID="0426dadd-5c0b-4f78-8364-7c09b73b51dc" containerID="688c1e4938a0eec6939a488695b9d2c0ced0a2ca6f798a873da8157dea83a5e8" exitCode=0 Mar 11 09:25:21 crc kubenswrapper[4808]: I0311 09:25:21.632115 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwp6r" event={"ID":"0426dadd-5c0b-4f78-8364-7c09b73b51dc","Type":"ContainerDied","Data":"688c1e4938a0eec6939a488695b9d2c0ced0a2ca6f798a873da8157dea83a5e8"} Mar 11 09:25:22 crc kubenswrapper[4808]: I0311 09:25:22.642861 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwp6r" event={"ID":"0426dadd-5c0b-4f78-8364-7c09b73b51dc","Type":"ContainerStarted","Data":"9fa6d8eb1ea53460ff1315a4425d182e7e154ec41f143fff76e3d69c5f837542"} Mar 11 09:25:22 crc kubenswrapper[4808]: I0311 09:25:22.664818 4808 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-hwp6r" podStartSLOduration=2.141128964 podStartE2EDuration="11.664772346s" podCreationTimestamp="2026-03-11 09:25:11 +0000 UTC" firstStartedPulling="2026-03-11 09:25:12.558936665 +0000 UTC m=+2763.512259985" lastFinishedPulling="2026-03-11 09:25:22.082580007 +0000 UTC m=+2773.035903367" observedRunningTime="2026-03-11 09:25:22.660870078 +0000 UTC m=+2773.614193428" watchObservedRunningTime="2026-03-11 09:25:22.664772346 +0000 UTC m=+2773.618095676" Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.655395 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.655991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.707929 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.761300 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hwp6r" Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.845970 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwp6r"] Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.946786 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 09:25:31 crc kubenswrapper[4808]: I0311 09:25:31.947098 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjf7z" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="registry-server" containerID="cri-o://29513221fa40a2f9b4434f3864ec7928adecba4e1493638e65f3de914068b1d8" gracePeriod=2 Mar 11 09:25:32 crc 
kubenswrapper[4808]: I0311 09:25:32.720162 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerID="29513221fa40a2f9b4434f3864ec7928adecba4e1493638e65f3de914068b1d8" exitCode=0 Mar 11 09:25:32 crc kubenswrapper[4808]: I0311 09:25:32.720276 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerDied","Data":"29513221fa40a2f9b4434f3864ec7928adecba4e1493638e65f3de914068b1d8"} Mar 11 09:25:34 crc kubenswrapper[4808]: I0311 09:25:34.971094 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.068794 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content\") pod \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.068875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities\") pod \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.068912 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqrw4\" (UniqueName: \"kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4\") pod \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\" (UID: \"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40\") " Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.070313 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities" (OuterVolumeSpecName: "utilities") pod "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" (UID: "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.075240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4" (OuterVolumeSpecName: "kube-api-access-mqrw4") pod "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" (UID: "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40"). InnerVolumeSpecName "kube-api-access-mqrw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.170178 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqrw4\" (UniqueName: \"kubernetes.io/projected/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-kube-api-access-mqrw4\") on node \"crc\" DevicePath \"\"" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.170218 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.222668 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" (UID: "7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.271768 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.746686 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjf7z" event={"ID":"7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40","Type":"ContainerDied","Data":"31d7df0b1a92b6af99a8547c27256530f89aed4b5f5978403bb104683b11ea20"} Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.746799 4808 scope.go:117] "RemoveContainer" containerID="29513221fa40a2f9b4434f3864ec7928adecba4e1493638e65f3de914068b1d8" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.746804 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjf7z" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.781464 4808 scope.go:117] "RemoveContainer" containerID="2f693ca5f7842cd62201e937bc422b97a81173594c3c463df398dde4054c3800" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.805613 4808 scope.go:117] "RemoveContainer" containerID="1fe0a3720c5193da7a2bf28b8802dce077f0452c2fb8fb7006be60fe8ebe461c" Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.808940 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 09:25:35 crc kubenswrapper[4808]: I0311 09:25:35.808993 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjf7z"] Mar 11 09:25:37 crc kubenswrapper[4808]: I0311 09:25:37.803912 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" path="/var/lib/kubelet/pods/7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40/volumes" Mar 11 09:25:54 crc 
kubenswrapper[4808]: I0311 09:25:54.231971 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:25:54 crc kubenswrapper[4808]: E0311 09:25:54.232948 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="extract-content" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.232961 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="extract-content" Mar 11 09:25:54 crc kubenswrapper[4808]: E0311 09:25:54.232975 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="registry-server" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.232981 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="registry-server" Mar 11 09:25:54 crc kubenswrapper[4808]: E0311 09:25:54.232993 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="extract-utilities" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.233000 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="extract-utilities" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.233170 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8a6d81-44ea-4ab5-a6c2-fa901e1ebf40" containerName="registry-server" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.235246 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.244664 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.394028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.394245 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f8w\" (UniqueName: \"kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.394469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.495866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.496270 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-45f8w\" (UniqueName: \"kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.496372 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.496465 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.496771 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.522227 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45f8w\" (UniqueName: \"kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w\") pod \"redhat-marketplace-x829q\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:54 crc kubenswrapper[4808]: I0311 09:25:54.556735 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:25:55 crc kubenswrapper[4808]: I0311 09:25:55.010892 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:25:55 crc kubenswrapper[4808]: I0311 09:25:55.921850 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerID="5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a" exitCode=0 Mar 11 09:25:55 crc kubenswrapper[4808]: I0311 09:25:55.921960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerDied","Data":"5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a"} Mar 11 09:25:55 crc kubenswrapper[4808]: I0311 09:25:55.922178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerStarted","Data":"99e54d9782cf3e889de82fd0bd2439d139f4f783d8e72985b3ead0fd9ca7485e"} Mar 11 09:25:57 crc kubenswrapper[4808]: I0311 09:25:57.936780 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerID="8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8" exitCode=0 Mar 11 09:25:57 crc kubenswrapper[4808]: I0311 09:25:57.936824 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerDied","Data":"8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8"} Mar 11 09:25:58 crc kubenswrapper[4808]: I0311 09:25:58.947273 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" 
event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerStarted","Data":"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c"} Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.141119 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x829q" podStartSLOduration=3.691063094 podStartE2EDuration="6.141096114s" podCreationTimestamp="2026-03-11 09:25:54 +0000 UTC" firstStartedPulling="2026-03-11 09:25:55.924694211 +0000 UTC m=+2806.878017531" lastFinishedPulling="2026-03-11 09:25:58.374727211 +0000 UTC m=+2809.328050551" observedRunningTime="2026-03-11 09:25:58.967907484 +0000 UTC m=+2809.921230814" watchObservedRunningTime="2026-03-11 09:26:00.141096114 +0000 UTC m=+2811.094419454" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.142283 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553686-9wd4h"] Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.143749 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.145516 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.146026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.146057 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.150089 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-9wd4h"] Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.198854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg\") pod \"auto-csr-approver-29553686-9wd4h\" (UID: \"634dfe6c-347d-4626-8807-995866218e6d\") " pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.300928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg\") pod \"auto-csr-approver-29553686-9wd4h\" (UID: \"634dfe6c-347d-4626-8807-995866218e6d\") " pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.322669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg\") pod \"auto-csr-approver-29553686-9wd4h\" (UID: \"634dfe6c-347d-4626-8807-995866218e6d\") " 
pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.461699 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.926104 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-9wd4h"] Mar 11 09:26:00 crc kubenswrapper[4808]: W0311 09:26:00.929730 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634dfe6c_347d_4626_8807_995866218e6d.slice/crio-c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a WatchSource:0}: Error finding container c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a: Status 404 returned error can't find the container with id c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a Mar 11 09:26:00 crc kubenswrapper[4808]: I0311 09:26:00.962684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" event={"ID":"634dfe6c-347d-4626-8807-995866218e6d","Type":"ContainerStarted","Data":"c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a"} Mar 11 09:26:02 crc kubenswrapper[4808]: I0311 09:26:02.977898 4808 generic.go:334] "Generic (PLEG): container finished" podID="634dfe6c-347d-4626-8807-995866218e6d" containerID="207e9f262ca66312d00fbb813fa63074cb46966f59734eba1689f75004cf22d7" exitCode=0 Mar 11 09:26:02 crc kubenswrapper[4808]: I0311 09:26:02.978001 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" event={"ID":"634dfe6c-347d-4626-8807-995866218e6d","Type":"ContainerDied","Data":"207e9f262ca66312d00fbb813fa63074cb46966f59734eba1689f75004cf22d7"} Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.288397 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.364773 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg\") pod \"634dfe6c-347d-4626-8807-995866218e6d\" (UID: \"634dfe6c-347d-4626-8807-995866218e6d\") " Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.373693 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg" (OuterVolumeSpecName: "kube-api-access-d59tg") pod "634dfe6c-347d-4626-8807-995866218e6d" (UID: "634dfe6c-347d-4626-8807-995866218e6d"). InnerVolumeSpecName "kube-api-access-d59tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.466846 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59tg\" (UniqueName: \"kubernetes.io/projected/634dfe6c-347d-4626-8807-995866218e6d-kube-api-access-d59tg\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.558179 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.558228 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.605543 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:04 crc kubenswrapper[4808]: I0311 09:26:04.996387 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.004461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-9wd4h" event={"ID":"634dfe6c-347d-4626-8807-995866218e6d","Type":"ContainerDied","Data":"c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a"} Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.004513 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4eb5669e4d0a6aa2f4c480d21bad68273cdfe0322a048ad8a6e61bdd577af5a" Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.090128 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.149306 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.355043 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-n59sn"] Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.361919 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-n59sn"] Mar 11 09:26:05 crc kubenswrapper[4808]: I0311 09:26:05.798066 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930c46fa-8cc3-45ad-946a-2bc2f5ff0745" path="/var/lib/kubelet/pods/930c46fa-8cc3-45ad-946a-2bc2f5ff0745/volumes" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.010921 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x829q" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="registry-server" containerID="cri-o://ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c" gracePeriod=2 Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 
09:26:07.499116 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.608128 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities\") pod \"ac7a646f-80dc-45da-a157-abdbaa22c095\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.608206 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content\") pod \"ac7a646f-80dc-45da-a157-abdbaa22c095\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.608255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45f8w\" (UniqueName: \"kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w\") pod \"ac7a646f-80dc-45da-a157-abdbaa22c095\" (UID: \"ac7a646f-80dc-45da-a157-abdbaa22c095\") " Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.610226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities" (OuterVolumeSpecName: "utilities") pod "ac7a646f-80dc-45da-a157-abdbaa22c095" (UID: "ac7a646f-80dc-45da-a157-abdbaa22c095"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.616300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w" (OuterVolumeSpecName: "kube-api-access-45f8w") pod "ac7a646f-80dc-45da-a157-abdbaa22c095" (UID: "ac7a646f-80dc-45da-a157-abdbaa22c095"). InnerVolumeSpecName "kube-api-access-45f8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.670470 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac7a646f-80dc-45da-a157-abdbaa22c095" (UID: "ac7a646f-80dc-45da-a157-abdbaa22c095"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.709894 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.709944 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7a646f-80dc-45da-a157-abdbaa22c095-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:07 crc kubenswrapper[4808]: I0311 09:26:07.709963 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45f8w\" (UniqueName: \"kubernetes.io/projected/ac7a646f-80dc-45da-a157-abdbaa22c095-kube-api-access-45f8w\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.022270 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac7a646f-80dc-45da-a157-abdbaa22c095" 
containerID="ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c" exitCode=0 Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.022329 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerDied","Data":"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c"} Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.022388 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x829q" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.022402 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x829q" event={"ID":"ac7a646f-80dc-45da-a157-abdbaa22c095","Type":"ContainerDied","Data":"99e54d9782cf3e889de82fd0bd2439d139f4f783d8e72985b3ead0fd9ca7485e"} Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.022418 4808 scope.go:117] "RemoveContainer" containerID="ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.047456 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.054903 4808 scope.go:117] "RemoveContainer" containerID="8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.060906 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x829q"] Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.073454 4808 scope.go:117] "RemoveContainer" containerID="5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.096977 4808 scope.go:117] "RemoveContainer" containerID="ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c" Mar 11 
09:26:08 crc kubenswrapper[4808]: E0311 09:26:08.097492 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c\": container with ID starting with ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c not found: ID does not exist" containerID="ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.097536 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c"} err="failed to get container status \"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c\": rpc error: code = NotFound desc = could not find container \"ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c\": container with ID starting with ddd56df798bbb4db5961e260f7d148ca699ee53c7d50f319bdaf1af27b13b71c not found: ID does not exist" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.097556 4808 scope.go:117] "RemoveContainer" containerID="8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8" Mar 11 09:26:08 crc kubenswrapper[4808]: E0311 09:26:08.098343 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8\": container with ID starting with 8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8 not found: ID does not exist" containerID="8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.098425 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8"} err="failed to get container status 
\"8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8\": rpc error: code = NotFound desc = could not find container \"8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8\": container with ID starting with 8040d225f5b767733dbadf3e4f261ec02a27313b06001f82d5e005f3e57258b8 not found: ID does not exist" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.098469 4808 scope.go:117] "RemoveContainer" containerID="5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a" Mar 11 09:26:08 crc kubenswrapper[4808]: E0311 09:26:08.098879 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a\": container with ID starting with 5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a not found: ID does not exist" containerID="5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a" Mar 11 09:26:08 crc kubenswrapper[4808]: I0311 09:26:08.098900 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a"} err="failed to get container status \"5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a\": rpc error: code = NotFound desc = could not find container \"5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a\": container with ID starting with 5341994667a0ea7b94ed7600ea8af39ed6dd6a63a9acc798c3c80f35d7f6670a not found: ID does not exist" Mar 11 09:26:09 crc kubenswrapper[4808]: I0311 09:26:09.800898 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" path="/var/lib/kubelet/pods/ac7a646f-80dc-45da-a157-abdbaa22c095/volumes" Mar 11 09:26:54 crc kubenswrapper[4808]: I0311 09:26:54.219722 4808 scope.go:117] "RemoveContainer" containerID="f6a9d9787f703ff5353d96a250475b232a6e416ec8e0d071181c56878aca576b" Mar 11 
09:27:16 crc kubenswrapper[4808]: I0311 09:27:16.027396 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:27:16 crc kubenswrapper[4808]: I0311 09:27:16.028304 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:27:46 crc kubenswrapper[4808]: I0311 09:27:46.026910 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:27:46 crc kubenswrapper[4808]: I0311 09:27:46.027422 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.144485 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553688-f8tvc"] Mar 11 09:28:00 crc kubenswrapper[4808]: E0311 09:28:00.146276 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634dfe6c-347d-4626-8807-995866218e6d" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146306 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="634dfe6c-347d-4626-8807-995866218e6d" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4808]: E0311 09:28:00.146385 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146394 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4808]: E0311 09:28:00.146408 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="extract-utilities" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146418 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="extract-utilities" Mar 11 09:28:00 crc kubenswrapper[4808]: E0311 09:28:00.146430 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="extract-content" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146439 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="extract-content" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146651 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7a646f-80dc-45da-a157-abdbaa22c095" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.146687 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="634dfe6c-347d-4626-8807-995866218e6d" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.147475 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.150283 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.150527 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.151162 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.152951 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-f8tvc"] Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.263138 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2m67\" (UniqueName: \"kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67\") pod \"auto-csr-approver-29553688-f8tvc\" (UID: \"e7eebce3-3490-4aa3-8222-cc75f81b298b\") " pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.365469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2m67\" (UniqueName: \"kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67\") pod \"auto-csr-approver-29553688-f8tvc\" (UID: \"e7eebce3-3490-4aa3-8222-cc75f81b298b\") " pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.388427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2m67\" (UniqueName: \"kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67\") pod \"auto-csr-approver-29553688-f8tvc\" (UID: \"e7eebce3-3490-4aa3-8222-cc75f81b298b\") " 
pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:00 crc kubenswrapper[4808]: I0311 09:28:00.480452 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:01 crc kubenswrapper[4808]: I0311 09:28:00.927786 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-f8tvc"] Mar 11 09:28:01 crc kubenswrapper[4808]: I0311 09:28:00.934241 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:28:01 crc kubenswrapper[4808]: I0311 09:28:01.930110 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" event={"ID":"e7eebce3-3490-4aa3-8222-cc75f81b298b","Type":"ContainerStarted","Data":"20d91ef97f586035dd1ba1a982848abc80c58abf9cfa10fd79e16baa12c213b9"} Mar 11 09:28:02 crc kubenswrapper[4808]: I0311 09:28:02.943058 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7eebce3-3490-4aa3-8222-cc75f81b298b" containerID="946f50b01cfaba73009a2bb55cdbfca7cff5d867670f7081f2ca6baf7f24f4d8" exitCode=0 Mar 11 09:28:02 crc kubenswrapper[4808]: I0311 09:28:02.943117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" event={"ID":"e7eebce3-3490-4aa3-8222-cc75f81b298b","Type":"ContainerDied","Data":"946f50b01cfaba73009a2bb55cdbfca7cff5d867670f7081f2ca6baf7f24f4d8"} Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.191738 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.231634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2m67\" (UniqueName: \"kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67\") pod \"e7eebce3-3490-4aa3-8222-cc75f81b298b\" (UID: \"e7eebce3-3490-4aa3-8222-cc75f81b298b\") " Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.237648 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67" (OuterVolumeSpecName: "kube-api-access-d2m67") pod "e7eebce3-3490-4aa3-8222-cc75f81b298b" (UID: "e7eebce3-3490-4aa3-8222-cc75f81b298b"). InnerVolumeSpecName "kube-api-access-d2m67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.333406 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2m67\" (UniqueName: \"kubernetes.io/projected/e7eebce3-3490-4aa3-8222-cc75f81b298b-kube-api-access-d2m67\") on node \"crc\" DevicePath \"\"" Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.956081 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" event={"ID":"e7eebce3-3490-4aa3-8222-cc75f81b298b","Type":"ContainerDied","Data":"20d91ef97f586035dd1ba1a982848abc80c58abf9cfa10fd79e16baa12c213b9"} Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.956128 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d91ef97f586035dd1ba1a982848abc80c58abf9cfa10fd79e16baa12c213b9" Mar 11 09:28:04 crc kubenswrapper[4808]: I0311 09:28:04.956162 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-f8tvc" Mar 11 09:28:05 crc kubenswrapper[4808]: I0311 09:28:05.267689 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-h478n"] Mar 11 09:28:05 crc kubenswrapper[4808]: I0311 09:28:05.271100 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-h478n"] Mar 11 09:28:05 crc kubenswrapper[4808]: I0311 09:28:05.805586 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8998cf7-d284-41cc-b5aa-699df0283d3a" path="/var/lib/kubelet/pods/f8998cf7-d284-41cc-b5aa-699df0283d3a/volumes" Mar 11 09:28:16 crc kubenswrapper[4808]: I0311 09:28:16.027865 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:28:16 crc kubenswrapper[4808]: I0311 09:28:16.028484 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:28:16 crc kubenswrapper[4808]: I0311 09:28:16.028534 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:28:16 crc kubenswrapper[4808]: I0311 09:28:16.029110 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:28:16 crc kubenswrapper[4808]: I0311 09:28:16.029198 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" gracePeriod=600 Mar 11 09:28:16 crc kubenswrapper[4808]: E0311 09:28:16.159214 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:28:17 crc kubenswrapper[4808]: I0311 09:28:17.032995 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" exitCode=0 Mar 11 09:28:17 crc kubenswrapper[4808]: I0311 09:28:17.033066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa"} Mar 11 09:28:17 crc kubenswrapper[4808]: I0311 09:28:17.033132 4808 scope.go:117] "RemoveContainer" containerID="5541d39dfe0a2921cc7a274b7e0e5e13c34f7201893fb76d9278c96d41e9af2b" Mar 11 09:28:17 crc kubenswrapper[4808]: I0311 09:28:17.033651 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:28:17 crc kubenswrapper[4808]: E0311 09:28:17.033926 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:28:30 crc kubenswrapper[4808]: I0311 09:28:30.790325 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:28:30 crc kubenswrapper[4808]: E0311 09:28:30.791415 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:28:44 crc kubenswrapper[4808]: I0311 09:28:44.790521 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:28:44 crc kubenswrapper[4808]: E0311 09:28:44.791631 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:28:54 crc kubenswrapper[4808]: I0311 09:28:54.317234 4808 scope.go:117] "RemoveContainer" containerID="229eb8c7dc4d4463b0d94f280010d7c479770a758c973a3e8d8e450b1dea11a2" Mar 11 09:28:58 crc kubenswrapper[4808]: I0311 
09:28:58.789906 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:28:58 crc kubenswrapper[4808]: E0311 09:28:58.790664 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:29:13 crc kubenswrapper[4808]: I0311 09:29:13.790568 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:29:13 crc kubenswrapper[4808]: E0311 09:29:13.791549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:29:25 crc kubenswrapper[4808]: I0311 09:29:25.789492 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:29:25 crc kubenswrapper[4808]: E0311 09:29:25.790901 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:29:38 crc 
kubenswrapper[4808]: I0311 09:29:38.790467 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:29:38 crc kubenswrapper[4808]: E0311 09:29:38.791564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:29:49 crc kubenswrapper[4808]: I0311 09:29:49.789806 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:29:49 crc kubenswrapper[4808]: E0311 09:29:49.790793 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.162333 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553690-w5cxs"] Mar 11 09:30:00 crc kubenswrapper[4808]: E0311 09:30:00.165134 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7eebce3-3490-4aa3-8222-cc75f81b298b" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.165286 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7eebce3-3490-4aa3-8222-cc75f81b298b" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.165831 4808 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7eebce3-3490-4aa3-8222-cc75f81b298b" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.166734 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.170257 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.171158 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.173868 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh"] Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.174973 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.176649 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.177964 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.178029 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.182738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-w5cxs"] Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.189033 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh"] Mar 11 09:30:00 crc 
kubenswrapper[4808]: I0311 09:30:00.329259 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26pxx\" (UniqueName: \"kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.329326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.329377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.329527 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvw7\" (UniqueName: \"kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7\") pod \"auto-csr-approver-29553690-w5cxs\" (UID: \"9a745817-9e61-4724-8327-136b52963f82\") " pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.430612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26pxx\" (UniqueName: 
\"kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.430703 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.430761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.431581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvw7\" (UniqueName: \"kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7\") pod \"auto-csr-approver-29553690-w5cxs\" (UID: \"9a745817-9e61-4724-8327-136b52963f82\") " pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.433021 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.438635 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.450004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvw7\" (UniqueName: \"kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7\") pod \"auto-csr-approver-29553690-w5cxs\" (UID: \"9a745817-9e61-4724-8327-136b52963f82\") " pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.452537 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26pxx\" (UniqueName: \"kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx\") pod \"collect-profiles-29553690-9xbqh\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.529047 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.535909 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.789918 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:30:00 crc kubenswrapper[4808]: E0311 09:30:00.790490 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.959821 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh"] Mar 11 09:30:00 crc kubenswrapper[4808]: I0311 09:30:00.993150 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-w5cxs"] Mar 11 09:30:01 crc kubenswrapper[4808]: W0311 09:30:01.005547 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a745817_9e61_4724_8327_136b52963f82.slice/crio-5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed WatchSource:0}: Error finding container 5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed: Status 404 returned error can't find the container with id 5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed Mar 11 09:30:01 crc kubenswrapper[4808]: I0311 09:30:01.696797 4808 generic.go:334] "Generic (PLEG): container finished" podID="be40bf22-dc95-42d9-89ab-ad8bde725469" containerID="992934f4ee9270c3e7ee0f6747df2a4d888a787f9fff9947d49e533ad5ffab1d" exitCode=0 Mar 11 09:30:01 crc kubenswrapper[4808]: I0311 09:30:01.696847 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" event={"ID":"be40bf22-dc95-42d9-89ab-ad8bde725469","Type":"ContainerDied","Data":"992934f4ee9270c3e7ee0f6747df2a4d888a787f9fff9947d49e533ad5ffab1d"} Mar 11 09:30:01 crc kubenswrapper[4808]: I0311 09:30:01.696990 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" event={"ID":"be40bf22-dc95-42d9-89ab-ad8bde725469","Type":"ContainerStarted","Data":"a731f171a0cb5c199a2e45b3b98cd1ebb8a0ebeaa6e10ecf6b58be1f6b5eb787"} Mar 11 09:30:01 crc kubenswrapper[4808]: I0311 09:30:01.698918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" event={"ID":"9a745817-9e61-4724-8327-136b52963f82","Type":"ContainerStarted","Data":"5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed"} Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.115312 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.177721 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume\") pod \"be40bf22-dc95-42d9-89ab-ad8bde725469\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.177789 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume\") pod \"be40bf22-dc95-42d9-89ab-ad8bde725469\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.177897 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26pxx\" (UniqueName: \"kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx\") pod \"be40bf22-dc95-42d9-89ab-ad8bde725469\" (UID: \"be40bf22-dc95-42d9-89ab-ad8bde725469\") " Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.178580 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume" (OuterVolumeSpecName: "config-volume") pod "be40bf22-dc95-42d9-89ab-ad8bde725469" (UID: "be40bf22-dc95-42d9-89ab-ad8bde725469"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.185650 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx" (OuterVolumeSpecName: "kube-api-access-26pxx") pod "be40bf22-dc95-42d9-89ab-ad8bde725469" (UID: "be40bf22-dc95-42d9-89ab-ad8bde725469"). 
InnerVolumeSpecName "kube-api-access-26pxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.185685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be40bf22-dc95-42d9-89ab-ad8bde725469" (UID: "be40bf22-dc95-42d9-89ab-ad8bde725469"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.279307 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be40bf22-dc95-42d9-89ab-ad8bde725469-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.279423 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be40bf22-dc95-42d9-89ab-ad8bde725469-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.279445 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26pxx\" (UniqueName: \"kubernetes.io/projected/be40bf22-dc95-42d9-89ab-ad8bde725469-kube-api-access-26pxx\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.727040 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" event={"ID":"be40bf22-dc95-42d9-89ab-ad8bde725469","Type":"ContainerDied","Data":"a731f171a0cb5c199a2e45b3b98cd1ebb8a0ebeaa6e10ecf6b58be1f6b5eb787"} Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.727071 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a731f171a0cb5c199a2e45b3b98cd1ebb8a0ebeaa6e10ecf6b58be1f6b5eb787" Mar 11 09:30:03 crc kubenswrapper[4808]: I0311 09:30:03.727103 4808 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh" Mar 11 09:30:04 crc kubenswrapper[4808]: I0311 09:30:04.179012 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9"] Mar 11 09:30:04 crc kubenswrapper[4808]: I0311 09:30:04.184304 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-mfhm9"] Mar 11 09:30:04 crc kubenswrapper[4808]: I0311 09:30:04.738873 4808 generic.go:334] "Generic (PLEG): container finished" podID="9a745817-9e61-4724-8327-136b52963f82" containerID="c272e6cb0dca49600b41ba5a4fbaec798371e54a84b290fd82bcfd3e0b34e7f1" exitCode=0 Mar 11 09:30:04 crc kubenswrapper[4808]: I0311 09:30:04.738920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" event={"ID":"9a745817-9e61-4724-8327-136b52963f82","Type":"ContainerDied","Data":"c272e6cb0dca49600b41ba5a4fbaec798371e54a84b290fd82bcfd3e0b34e7f1"} Mar 11 09:30:05 crc kubenswrapper[4808]: I0311 09:30:05.801844 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38820ed5-b98d-45c8-8935-e18657f601b2" path="/var/lib/kubelet/pods/38820ed5-b98d-45c8-8935-e18657f601b2/volumes" Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.009554 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.019764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvw7\" (UniqueName: \"kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7\") pod \"9a745817-9e61-4724-8327-136b52963f82\" (UID: \"9a745817-9e61-4724-8327-136b52963f82\") " Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.027554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7" (OuterVolumeSpecName: "kube-api-access-8vvw7") pod "9a745817-9e61-4724-8327-136b52963f82" (UID: "9a745817-9e61-4724-8327-136b52963f82"). InnerVolumeSpecName "kube-api-access-8vvw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.121159 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvw7\" (UniqueName: \"kubernetes.io/projected/9a745817-9e61-4724-8327-136b52963f82-kube-api-access-8vvw7\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.756776 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" event={"ID":"9a745817-9e61-4724-8327-136b52963f82","Type":"ContainerDied","Data":"5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed"} Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.756876 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-w5cxs" Mar 11 09:30:06 crc kubenswrapper[4808]: I0311 09:30:06.757235 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9594fec511a8ce03cf1c2c398e4835a541060a94be8935e8df11cf5e60deed" Mar 11 09:30:07 crc kubenswrapper[4808]: I0311 09:30:07.071998 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-fqxrz"] Mar 11 09:30:07 crc kubenswrapper[4808]: I0311 09:30:07.079564 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-fqxrz"] Mar 11 09:30:07 crc kubenswrapper[4808]: I0311 09:30:07.797764 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905c244d-08d7-45cd-9d63-ebe879768c31" path="/var/lib/kubelet/pods/905c244d-08d7-45cd-9d63-ebe879768c31/volumes" Mar 11 09:30:12 crc kubenswrapper[4808]: I0311 09:30:12.789632 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:30:12 crc kubenswrapper[4808]: E0311 09:30:12.790476 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:23 crc kubenswrapper[4808]: I0311 09:30:23.789137 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:30:23 crc kubenswrapper[4808]: E0311 09:30:23.790188 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:36 crc kubenswrapper[4808]: I0311 09:30:36.789780 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:30:36 crc kubenswrapper[4808]: E0311 09:30:36.790876 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:50 crc kubenswrapper[4808]: I0311 09:30:50.788984 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:30:50 crc kubenswrapper[4808]: E0311 09:30:50.789844 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:30:54 crc kubenswrapper[4808]: I0311 09:30:54.400054 4808 scope.go:117] "RemoveContainer" containerID="3ba3491182a0aa1a4bb571a427424cb21e9b5c5f2ba7b9e73b6ea5c423d524ea" Mar 11 09:30:54 crc kubenswrapper[4808]: I0311 09:30:54.454972 4808 scope.go:117] "RemoveContainer" containerID="d895f084cd2c6d40dc6a13803628f2f5f55e341124eb6b77075602b58de710e5" Mar 11 09:31:01 crc kubenswrapper[4808]: 
I0311 09:31:01.789575 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:31:01 crc kubenswrapper[4808]: E0311 09:31:01.790246 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:31:13 crc kubenswrapper[4808]: I0311 09:31:13.789114 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:31:13 crc kubenswrapper[4808]: E0311 09:31:13.789980 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:31:28 crc kubenswrapper[4808]: I0311 09:31:28.790671 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:31:28 crc kubenswrapper[4808]: E0311 09:31:28.791459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:31:41 crc 
kubenswrapper[4808]: I0311 09:31:41.791546 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:31:41 crc kubenswrapper[4808]: E0311 09:31:41.792438 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:31:52 crc kubenswrapper[4808]: I0311 09:31:52.788926 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:31:52 crc kubenswrapper[4808]: E0311 09:31:52.789707 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.142423 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553692-9vgcj"] Mar 11 09:32:00 crc kubenswrapper[4808]: E0311 09:32:00.143457 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be40bf22-dc95-42d9-89ab-ad8bde725469" containerName="collect-profiles" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.143477 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="be40bf22-dc95-42d9-89ab-ad8bde725469" containerName="collect-profiles" Mar 11 09:32:00 crc kubenswrapper[4808]: E0311 09:32:00.143533 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9a745817-9e61-4724-8327-136b52963f82" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.143547 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a745817-9e61-4724-8327-136b52963f82" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.143747 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="be40bf22-dc95-42d9-89ab-ad8bde725469" containerName="collect-profiles" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.143772 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a745817-9e61-4724-8327-136b52963f82" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.144484 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.146677 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.147692 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.148029 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.149959 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-9vgcj"] Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.250646 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9n5\" (UniqueName: \"kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5\") pod \"auto-csr-approver-29553692-9vgcj\" (UID: \"87f85b3a-83e6-4767-b819-e5c20a147972\") " 
pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.351967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9n5\" (UniqueName: \"kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5\") pod \"auto-csr-approver-29553692-9vgcj\" (UID: \"87f85b3a-83e6-4767-b819-e5c20a147972\") " pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.374919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9n5\" (UniqueName: \"kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5\") pod \"auto-csr-approver-29553692-9vgcj\" (UID: \"87f85b3a-83e6-4767-b819-e5c20a147972\") " pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.467659 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:00 crc kubenswrapper[4808]: I0311 09:32:00.712902 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-9vgcj"] Mar 11 09:32:01 crc kubenswrapper[4808]: I0311 09:32:01.673536 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" event={"ID":"87f85b3a-83e6-4767-b819-e5c20a147972","Type":"ContainerStarted","Data":"a29987774c70b43ecca44876a847f73d675ef22b26c429017d59287ac64d6ac8"} Mar 11 09:32:02 crc kubenswrapper[4808]: I0311 09:32:02.685225 4808 generic.go:334] "Generic (PLEG): container finished" podID="87f85b3a-83e6-4767-b819-e5c20a147972" containerID="525c51ddbe213746de9164ecd521193b3950372c485b90c7b7ff241e508b16fe" exitCode=0 Mar 11 09:32:02 crc kubenswrapper[4808]: I0311 09:32:02.685453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553692-9vgcj" event={"ID":"87f85b3a-83e6-4767-b819-e5c20a147972","Type":"ContainerDied","Data":"525c51ddbe213746de9164ecd521193b3950372c485b90c7b7ff241e508b16fe"} Mar 11 09:32:03 crc kubenswrapper[4808]: I0311 09:32:03.951521 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.104800 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9n5\" (UniqueName: \"kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5\") pod \"87f85b3a-83e6-4767-b819-e5c20a147972\" (UID: \"87f85b3a-83e6-4767-b819-e5c20a147972\") " Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.110266 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5" (OuterVolumeSpecName: "kube-api-access-fw9n5") pod "87f85b3a-83e6-4767-b819-e5c20a147972" (UID: "87f85b3a-83e6-4767-b819-e5c20a147972"). InnerVolumeSpecName "kube-api-access-fw9n5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.206964 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9n5\" (UniqueName: \"kubernetes.io/projected/87f85b3a-83e6-4767-b819-e5c20a147972-kube-api-access-fw9n5\") on node \"crc\" DevicePath \"\"" Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.703499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" event={"ID":"87f85b3a-83e6-4767-b819-e5c20a147972","Type":"ContainerDied","Data":"a29987774c70b43ecca44876a847f73d675ef22b26c429017d59287ac64d6ac8"} Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.703548 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29987774c70b43ecca44876a847f73d675ef22b26c429017d59287ac64d6ac8" Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.703572 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-9vgcj" Mar 11 09:32:04 crc kubenswrapper[4808]: I0311 09:32:04.789503 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:32:04 crc kubenswrapper[4808]: E0311 09:32:04.789691 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:32:05 crc kubenswrapper[4808]: I0311 09:32:05.019447 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-9wd4h"] Mar 11 09:32:05 crc kubenswrapper[4808]: I0311 09:32:05.027013 4808 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-9wd4h"] Mar 11 09:32:05 crc kubenswrapper[4808]: I0311 09:32:05.805355 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634dfe6c-347d-4626-8807-995866218e6d" path="/var/lib/kubelet/pods/634dfe6c-347d-4626-8807-995866218e6d/volumes" Mar 11 09:32:16 crc kubenswrapper[4808]: I0311 09:32:16.789111 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:32:16 crc kubenswrapper[4808]: E0311 09:32:16.789789 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:32:29 crc kubenswrapper[4808]: I0311 09:32:29.819855 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:32:29 crc kubenswrapper[4808]: E0311 09:32:29.820863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:32:41 crc kubenswrapper[4808]: I0311 09:32:41.789841 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:32:41 crc kubenswrapper[4808]: E0311 09:32:41.790745 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:32:54 crc kubenswrapper[4808]: I0311 09:32:54.527996 4808 scope.go:117] "RemoveContainer" containerID="207e9f262ca66312d00fbb813fa63074cb46966f59734eba1689f75004cf22d7" Mar 11 09:32:56 crc kubenswrapper[4808]: I0311 09:32:56.789971 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:32:56 crc kubenswrapper[4808]: E0311 09:32:56.790516 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:33:10 crc kubenswrapper[4808]: I0311 09:33:10.789172 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:33:10 crc kubenswrapper[4808]: E0311 09:33:10.789915 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:33:23 crc kubenswrapper[4808]: I0311 09:33:23.799222 4808 scope.go:117] "RemoveContainer" 
containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:33:24 crc kubenswrapper[4808]: I0311 09:33:24.330011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c"} Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.525675 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:39 crc kubenswrapper[4808]: E0311 09:33:39.526938 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f85b3a-83e6-4767-b819-e5c20a147972" containerName="oc" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.526972 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f85b3a-83e6-4767-b819-e5c20a147972" containerName="oc" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.527454 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f85b3a-83e6-4767-b819-e5c20a147972" containerName="oc" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.529703 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.538446 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.556378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zk7q\" (UniqueName: \"kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.556626 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.556748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.658256 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zk7q\" (UniqueName: \"kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.658333 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.658414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.659008 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.659225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.694688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zk7q\" (UniqueName: \"kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q\") pod \"community-operators-9c8pg\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:39 crc kubenswrapper[4808]: I0311 09:33:39.860681 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:40 crc kubenswrapper[4808]: I0311 09:33:40.115478 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:40 crc kubenswrapper[4808]: I0311 09:33:40.445519 4808 generic.go:334] "Generic (PLEG): container finished" podID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerID="dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326" exitCode=0 Mar 11 09:33:40 crc kubenswrapper[4808]: I0311 09:33:40.445576 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerDied","Data":"dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326"} Mar 11 09:33:40 crc kubenswrapper[4808]: I0311 09:33:40.445919 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerStarted","Data":"0e417ce26c499b11705e1daf81e4d17cfcfebf5bb7b7f3156a32324f43e4a4b9"} Mar 11 09:33:40 crc kubenswrapper[4808]: I0311 09:33:40.447498 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:33:41 crc kubenswrapper[4808]: I0311 09:33:41.461921 4808 generic.go:334] "Generic (PLEG): container finished" podID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerID="7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8" exitCode=0 Mar 11 09:33:41 crc kubenswrapper[4808]: I0311 09:33:41.461958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerDied","Data":"7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8"} Mar 11 09:33:42 crc kubenswrapper[4808]: I0311 09:33:42.474777 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerStarted","Data":"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e"} Mar 11 09:33:42 crc kubenswrapper[4808]: I0311 09:33:42.498938 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9c8pg" podStartSLOduration=2.105011038 podStartE2EDuration="3.498921941s" podCreationTimestamp="2026-03-11 09:33:39 +0000 UTC" firstStartedPulling="2026-03-11 09:33:40.447263798 +0000 UTC m=+3271.400587118" lastFinishedPulling="2026-03-11 09:33:41.841174701 +0000 UTC m=+3272.794498021" observedRunningTime="2026-03-11 09:33:42.496353749 +0000 UTC m=+3273.449677079" watchObservedRunningTime="2026-03-11 09:33:42.498921941 +0000 UTC m=+3273.452245281" Mar 11 09:33:49 crc kubenswrapper[4808]: I0311 09:33:49.861462 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:49 crc kubenswrapper[4808]: I0311 09:33:49.863427 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:49 crc kubenswrapper[4808]: I0311 09:33:49.932184 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:50 crc kubenswrapper[4808]: I0311 09:33:50.588704 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:50 crc kubenswrapper[4808]: I0311 09:33:50.647448 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:52 crc kubenswrapper[4808]: I0311 09:33:52.544799 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9c8pg" 
podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="registry-server" containerID="cri-o://7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e" gracePeriod=2 Mar 11 09:33:52 crc kubenswrapper[4808]: I0311 09:33:52.972477 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.165606 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content\") pod \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.166041 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities\") pod \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.166176 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zk7q\" (UniqueName: \"kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q\") pod \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\" (UID: \"39da8ba9-d457-4e7d-97d9-daea56bfe2c3\") " Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.167161 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities" (OuterVolumeSpecName: "utilities") pod "39da8ba9-d457-4e7d-97d9-daea56bfe2c3" (UID: "39da8ba9-d457-4e7d-97d9-daea56bfe2c3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.175808 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q" (OuterVolumeSpecName: "kube-api-access-9zk7q") pod "39da8ba9-d457-4e7d-97d9-daea56bfe2c3" (UID: "39da8ba9-d457-4e7d-97d9-daea56bfe2c3"). InnerVolumeSpecName "kube-api-access-9zk7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.214236 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39da8ba9-d457-4e7d-97d9-daea56bfe2c3" (UID: "39da8ba9-d457-4e7d-97d9-daea56bfe2c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.268630 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.268683 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.268704 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zk7q\" (UniqueName: \"kubernetes.io/projected/39da8ba9-d457-4e7d-97d9-daea56bfe2c3-kube-api-access-9zk7q\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.552812 4808 generic.go:334] "Generic (PLEG): container finished" podID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" 
containerID="7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e" exitCode=0 Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.552851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerDied","Data":"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e"} Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.552877 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8pg" event={"ID":"39da8ba9-d457-4e7d-97d9-daea56bfe2c3","Type":"ContainerDied","Data":"0e417ce26c499b11705e1daf81e4d17cfcfebf5bb7b7f3156a32324f43e4a4b9"} Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.552901 4808 scope.go:117] "RemoveContainer" containerID="7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.552905 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9c8pg" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.589551 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.591931 4808 scope.go:117] "RemoveContainer" containerID="7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.597125 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9c8pg"] Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.609008 4808 scope.go:117] "RemoveContainer" containerID="dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.631050 4808 scope.go:117] "RemoveContainer" containerID="7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e" Mar 11 09:33:53 crc kubenswrapper[4808]: E0311 09:33:53.631462 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e\": container with ID starting with 7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e not found: ID does not exist" containerID="7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.631491 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e"} err="failed to get container status \"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e\": rpc error: code = NotFound desc = could not find container \"7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e\": container with ID starting with 7441959987a0912e0f05583919b3aa95f00ee68dbc08dcc94a24381c33a6710e not 
found: ID does not exist" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.631512 4808 scope.go:117] "RemoveContainer" containerID="7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8" Mar 11 09:33:53 crc kubenswrapper[4808]: E0311 09:33:53.631805 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8\": container with ID starting with 7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8 not found: ID does not exist" containerID="7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.631823 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8"} err="failed to get container status \"7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8\": rpc error: code = NotFound desc = could not find container \"7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8\": container with ID starting with 7c4bc70685c6a20af5ad5bb7076a1f5073f665cd40572e4b7479cb97119e96a8 not found: ID does not exist" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.631836 4808 scope.go:117] "RemoveContainer" containerID="dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326" Mar 11 09:33:53 crc kubenswrapper[4808]: E0311 09:33:53.632140 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326\": container with ID starting with dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326 not found: ID does not exist" containerID="dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.632161 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326"} err="failed to get container status \"dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326\": rpc error: code = NotFound desc = could not find container \"dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326\": container with ID starting with dae1f3e8e1f8050137a67f91877cf6fc71d3b6b6dcfb71ff4e6d29b1e4e59326 not found: ID does not exist" Mar 11 09:33:53 crc kubenswrapper[4808]: I0311 09:33:53.799688 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" path="/var/lib/kubelet/pods/39da8ba9-d457-4e7d-97d9-daea56bfe2c3/volumes" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.219098 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:33:56 crc kubenswrapper[4808]: E0311 09:33:56.223442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="extract-content" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.223531 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="extract-content" Mar 11 09:33:56 crc kubenswrapper[4808]: E0311 09:33:56.223606 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="extract-utilities" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.223655 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="extract-utilities" Mar 11 09:33:56 crc kubenswrapper[4808]: E0311 09:33:56.223718 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="registry-server" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 
09:33:56.223784 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="registry-server" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.223977 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="39da8ba9-d457-4e7d-97d9-daea56bfe2c3" containerName="registry-server" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.225051 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.237638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.315714 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.315980 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8f5\" (UniqueName: \"kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.316189 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc 
kubenswrapper[4808]: I0311 09:33:56.417714 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8f5\" (UniqueName: \"kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.417800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.417864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.418421 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.418436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.443849 
4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8f5\" (UniqueName: \"kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5\") pod \"certified-operators-gpmhn\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.553721 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:33:56 crc kubenswrapper[4808]: I0311 09:33:56.993682 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:33:56 crc kubenswrapper[4808]: W0311 09:33:56.997003 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1c3b4c6_5902_4e2c_92a0_e6ce481ac6b8.slice/crio-d77f5f34eb261e37ef40f89d17ae28ab66d2e4aa0c211c98d1fee6dd25bcedb2 WatchSource:0}: Error finding container d77f5f34eb261e37ef40f89d17ae28ab66d2e4aa0c211c98d1fee6dd25bcedb2: Status 404 returned error can't find the container with id d77f5f34eb261e37ef40f89d17ae28ab66d2e4aa0c211c98d1fee6dd25bcedb2 Mar 11 09:33:57 crc kubenswrapper[4808]: I0311 09:33:57.584903 4808 generic.go:334] "Generic (PLEG): container finished" podID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerID="112e54c93a96967e1ea5a7237eb1249f624a1b4cb699d37d6275c42caa5232c5" exitCode=0 Mar 11 09:33:57 crc kubenswrapper[4808]: I0311 09:33:57.585014 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerDied","Data":"112e54c93a96967e1ea5a7237eb1249f624a1b4cb699d37d6275c42caa5232c5"} Mar 11 09:33:57 crc kubenswrapper[4808]: I0311 09:33:57.585228 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" 
event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerStarted","Data":"d77f5f34eb261e37ef40f89d17ae28ab66d2e4aa0c211c98d1fee6dd25bcedb2"} Mar 11 09:33:58 crc kubenswrapper[4808]: I0311 09:33:58.598049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerStarted","Data":"2c3f2aa6cbd33ad5d47c61275912e423eb12104a2af799d00f5271d92c5031b8"} Mar 11 09:33:59 crc kubenswrapper[4808]: I0311 09:33:59.606502 4808 generic.go:334] "Generic (PLEG): container finished" podID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerID="2c3f2aa6cbd33ad5d47c61275912e423eb12104a2af799d00f5271d92c5031b8" exitCode=0 Mar 11 09:33:59 crc kubenswrapper[4808]: I0311 09:33:59.606633 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerDied","Data":"2c3f2aa6cbd33ad5d47c61275912e423eb12104a2af799d00f5271d92c5031b8"} Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.193894 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553694-rvshs"] Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.195017 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.197289 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.198962 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.199169 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.209341 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-rvshs"] Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.374996 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2n4b\" (UniqueName: \"kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b\") pod \"auto-csr-approver-29553694-rvshs\" (UID: \"9b647eaf-87f1-4ce1-b0b2-3a359584071e\") " pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.477086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2n4b\" (UniqueName: \"kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b\") pod \"auto-csr-approver-29553694-rvshs\" (UID: \"9b647eaf-87f1-4ce1-b0b2-3a359584071e\") " pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.496628 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2n4b\" (UniqueName: \"kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b\") pod \"auto-csr-approver-29553694-rvshs\" (UID: \"9b647eaf-87f1-4ce1-b0b2-3a359584071e\") " 
pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.519292 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:00 crc kubenswrapper[4808]: I0311 09:34:00.929885 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-rvshs"] Mar 11 09:34:00 crc kubenswrapper[4808]: W0311 09:34:00.933348 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b647eaf_87f1_4ce1_b0b2_3a359584071e.slice/crio-d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77 WatchSource:0}: Error finding container d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77: Status 404 returned error can't find the container with id d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77 Mar 11 09:34:01 crc kubenswrapper[4808]: I0311 09:34:01.621876 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-rvshs" event={"ID":"9b647eaf-87f1-4ce1-b0b2-3a359584071e","Type":"ContainerStarted","Data":"d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77"} Mar 11 09:34:01 crc kubenswrapper[4808]: I0311 09:34:01.624409 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerStarted","Data":"083495b7bf1ac4dc927602ee0688af7894d699b84215692db37041b692f41667"} Mar 11 09:34:01 crc kubenswrapper[4808]: I0311 09:34:01.652123 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gpmhn" podStartSLOduration=2.580712136 podStartE2EDuration="5.652106529s" podCreationTimestamp="2026-03-11 09:33:56 +0000 UTC" firstStartedPulling="2026-03-11 09:33:57.586652085 +0000 UTC 
m=+3288.539975405" lastFinishedPulling="2026-03-11 09:34:00.658046478 +0000 UTC m=+3291.611369798" observedRunningTime="2026-03-11 09:34:01.647229051 +0000 UTC m=+3292.600552371" watchObservedRunningTime="2026-03-11 09:34:01.652106529 +0000 UTC m=+3292.605429849" Mar 11 09:34:02 crc kubenswrapper[4808]: I0311 09:34:02.635032 4808 generic.go:334] "Generic (PLEG): container finished" podID="9b647eaf-87f1-4ce1-b0b2-3a359584071e" containerID="c35c72cfcc2b9905f6a9871c999a79ab223bc581d9b12edc5b3f439f56c1926c" exitCode=0 Mar 11 09:34:02 crc kubenswrapper[4808]: I0311 09:34:02.635094 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-rvshs" event={"ID":"9b647eaf-87f1-4ce1-b0b2-3a359584071e","Type":"ContainerDied","Data":"c35c72cfcc2b9905f6a9871c999a79ab223bc581d9b12edc5b3f439f56c1926c"} Mar 11 09:34:03 crc kubenswrapper[4808]: I0311 09:34:03.969988 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.148467 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2n4b\" (UniqueName: \"kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b\") pod \"9b647eaf-87f1-4ce1-b0b2-3a359584071e\" (UID: \"9b647eaf-87f1-4ce1-b0b2-3a359584071e\") " Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.156553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b" (OuterVolumeSpecName: "kube-api-access-z2n4b") pod "9b647eaf-87f1-4ce1-b0b2-3a359584071e" (UID: "9b647eaf-87f1-4ce1-b0b2-3a359584071e"). InnerVolumeSpecName "kube-api-access-z2n4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.250807 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2n4b\" (UniqueName: \"kubernetes.io/projected/9b647eaf-87f1-4ce1-b0b2-3a359584071e-kube-api-access-z2n4b\") on node \"crc\" DevicePath \"\"" Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.653632 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-rvshs" event={"ID":"9b647eaf-87f1-4ce1-b0b2-3a359584071e","Type":"ContainerDied","Data":"d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77"} Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.653956 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f0a423fcef2d0ac4124a6a771d37298f0ab4702aa603dd79232704100ffd77" Mar 11 09:34:04 crc kubenswrapper[4808]: I0311 09:34:04.653676 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-rvshs" Mar 11 09:34:05 crc kubenswrapper[4808]: I0311 09:34:05.044401 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-f8tvc"] Mar 11 09:34:05 crc kubenswrapper[4808]: I0311 09:34:05.056329 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-f8tvc"] Mar 11 09:34:05 crc kubenswrapper[4808]: I0311 09:34:05.798884 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7eebce3-3490-4aa3-8222-cc75f81b298b" path="/var/lib/kubelet/pods/e7eebce3-3490-4aa3-8222-cc75f81b298b/volumes" Mar 11 09:34:06 crc kubenswrapper[4808]: I0311 09:34:06.553845 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:06 crc kubenswrapper[4808]: I0311 09:34:06.554127 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:06 crc kubenswrapper[4808]: I0311 09:34:06.602802 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:06 crc kubenswrapper[4808]: I0311 09:34:06.702873 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:06 crc kubenswrapper[4808]: I0311 09:34:06.847271 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:34:08 crc kubenswrapper[4808]: I0311 09:34:08.681920 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpmhn" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="registry-server" containerID="cri-o://083495b7bf1ac4dc927602ee0688af7894d699b84215692db37041b692f41667" gracePeriod=2 Mar 11 09:34:09 crc kubenswrapper[4808]: I0311 09:34:09.690180 4808 generic.go:334] "Generic (PLEG): container finished" podID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerID="083495b7bf1ac4dc927602ee0688af7894d699b84215692db37041b692f41667" exitCode=0 Mar 11 09:34:09 crc kubenswrapper[4808]: I0311 09:34:09.690225 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerDied","Data":"083495b7bf1ac4dc927602ee0688af7894d699b84215692db37041b692f41667"} Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.431248 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.541115 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8f5\" (UniqueName: \"kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5\") pod \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.541530 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content\") pod \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.541611 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities\") pod \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\" (UID: \"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8\") " Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.542436 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities" (OuterVolumeSpecName: "utilities") pod "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" (UID: "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.542712 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.546518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5" (OuterVolumeSpecName: "kube-api-access-9x8f5") pod "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" (UID: "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8"). InnerVolumeSpecName "kube-api-access-9x8f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.611858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" (UID: "d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.643729 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8f5\" (UniqueName: \"kubernetes.io/projected/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-kube-api-access-9x8f5\") on node \"crc\" DevicePath \"\"" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.643769 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.701721 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpmhn" event={"ID":"d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8","Type":"ContainerDied","Data":"d77f5f34eb261e37ef40f89d17ae28ab66d2e4aa0c211c98d1fee6dd25bcedb2"} Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.701761 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpmhn" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.701805 4808 scope.go:117] "RemoveContainer" containerID="083495b7bf1ac4dc927602ee0688af7894d699b84215692db37041b692f41667" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.721631 4808 scope.go:117] "RemoveContainer" containerID="2c3f2aa6cbd33ad5d47c61275912e423eb12104a2af799d00f5271d92c5031b8" Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.753582 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.762234 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpmhn"] Mar 11 09:34:10 crc kubenswrapper[4808]: I0311 09:34:10.774818 4808 scope.go:117] "RemoveContainer" containerID="112e54c93a96967e1ea5a7237eb1249f624a1b4cb699d37d6275c42caa5232c5" Mar 11 09:34:11 crc kubenswrapper[4808]: I0311 09:34:11.801984 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" path="/var/lib/kubelet/pods/d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8/volumes" Mar 11 09:34:54 crc kubenswrapper[4808]: I0311 09:34:54.625927 4808 scope.go:117] "RemoveContainer" containerID="946f50b01cfaba73009a2bb55cdbfca7cff5d867670f7081f2ca6baf7f24f4d8" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.782883 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:35 crc kubenswrapper[4808]: E0311 09:35:35.783734 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b647eaf-87f1-4ce1-b0b2-3a359584071e" containerName="oc" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.783751 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b647eaf-87f1-4ce1-b0b2-3a359584071e" containerName="oc" Mar 11 09:35:35 crc kubenswrapper[4808]: E0311 09:35:35.783765 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="extract-utilities" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.783774 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="extract-utilities" Mar 11 09:35:35 crc kubenswrapper[4808]: E0311 09:35:35.783798 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="extract-content" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.783807 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="extract-content" Mar 11 09:35:35 crc kubenswrapper[4808]: E0311 09:35:35.783820 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="registry-server" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.783827 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="registry-server" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.783995 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c3b4c6-5902-4e2c-92a0-e6ce481ac6b8" containerName="registry-server" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.784014 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b647eaf-87f1-4ce1-b0b2-3a359584071e" containerName="oc" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.785018 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.798745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.922391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.922467 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:35 crc kubenswrapper[4808]: I0311 09:35:35.922521 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7zxp\" (UniqueName: \"kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.023646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.023697 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.023808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7zxp\" (UniqueName: \"kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.024194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.024319 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.051449 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7zxp\" (UniqueName: \"kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp\") pod \"redhat-operators-w6k6l\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.107648 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:36 crc kubenswrapper[4808]: I0311 09:35:36.582681 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:37 crc kubenswrapper[4808]: I0311 09:35:37.407703 4808 generic.go:334] "Generic (PLEG): container finished" podID="53ce013b-83ba-4a10-9008-05386a4eff65" containerID="472ace70bb324d4426754cf7bf1f881934bc9b94b192e40ef62f207891355baa" exitCode=0 Mar 11 09:35:37 crc kubenswrapper[4808]: I0311 09:35:37.407806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerDied","Data":"472ace70bb324d4426754cf7bf1f881934bc9b94b192e40ef62f207891355baa"} Mar 11 09:35:37 crc kubenswrapper[4808]: I0311 09:35:37.408006 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerStarted","Data":"803f1ade4f7605f13588830e82b716828308002a97caec9b0d6f705a96af4113"} Mar 11 09:35:38 crc kubenswrapper[4808]: I0311 09:35:38.416780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerStarted","Data":"e79f72eaa362867f9f14ed70b849521cff8ebe898e82f64805d21b7ed5fbd490"} Mar 11 09:35:39 crc kubenswrapper[4808]: I0311 09:35:39.424189 4808 generic.go:334] "Generic (PLEG): container finished" podID="53ce013b-83ba-4a10-9008-05386a4eff65" containerID="e79f72eaa362867f9f14ed70b849521cff8ebe898e82f64805d21b7ed5fbd490" exitCode=0 Mar 11 09:35:39 crc kubenswrapper[4808]: I0311 09:35:39.424250 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" 
event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerDied","Data":"e79f72eaa362867f9f14ed70b849521cff8ebe898e82f64805d21b7ed5fbd490"} Mar 11 09:35:40 crc kubenswrapper[4808]: I0311 09:35:40.433214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerStarted","Data":"73be924bfae8df164b69d9863a72bc4eb49dc9c8b7b55e703cc95aca9e085ecf"} Mar 11 09:35:40 crc kubenswrapper[4808]: I0311 09:35:40.460579 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6k6l" podStartSLOduration=3.055942336 podStartE2EDuration="5.460555523s" podCreationTimestamp="2026-03-11 09:35:35 +0000 UTC" firstStartedPulling="2026-03-11 09:35:37.40893658 +0000 UTC m=+3388.362259900" lastFinishedPulling="2026-03-11 09:35:39.813549757 +0000 UTC m=+3390.766873087" observedRunningTime="2026-03-11 09:35:40.456851728 +0000 UTC m=+3391.410175098" watchObservedRunningTime="2026-03-11 09:35:40.460555523 +0000 UTC m=+3391.413878873" Mar 11 09:35:46 crc kubenswrapper[4808]: I0311 09:35:46.027757 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:35:46 crc kubenswrapper[4808]: I0311 09:35:46.028299 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:35:46 crc kubenswrapper[4808]: I0311 09:35:46.108560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:46 crc kubenswrapper[4808]: I0311 09:35:46.108618 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:47 crc kubenswrapper[4808]: I0311 09:35:47.186153 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w6k6l" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="registry-server" probeResult="failure" output=< Mar 11 09:35:47 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 09:35:47 crc kubenswrapper[4808]: > Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.617568 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.619337 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.632109 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.705971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.706077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgtn\" (UniqueName: \"kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " 
pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.706099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.807664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.807814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpgtn\" (UniqueName: \"kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.807852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.808222 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " 
pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.808261 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.829732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpgtn\" (UniqueName: \"kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn\") pod \"redhat-marketplace-r79bq\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:55 crc kubenswrapper[4808]: I0311 09:35:55.938693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.161297 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.201597 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.377851 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.550810 4808 generic.go:334] "Generic (PLEG): container finished" podID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerID="f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b" exitCode=0 Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.550849 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerDied","Data":"f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b"} Mar 11 09:35:56 crc kubenswrapper[4808]: I0311 09:35:56.551215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerStarted","Data":"cb01d3f71238fe06cd3f01c27d5c441accda2bd1f17c1986b28ba74422556da0"} Mar 11 09:35:57 crc kubenswrapper[4808]: I0311 09:35:57.559656 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerDied","Data":"166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268"} Mar 11 09:35:57 crc kubenswrapper[4808]: I0311 09:35:57.559619 4808 generic.go:334] "Generic (PLEG): container finished" podID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerID="166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268" exitCode=0 Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.398603 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.398874 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w6k6l" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="registry-server" containerID="cri-o://73be924bfae8df164b69d9863a72bc4eb49dc9c8b7b55e703cc95aca9e085ecf" gracePeriod=2 Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.571497 4808 generic.go:334] "Generic (PLEG): container finished" podID="53ce013b-83ba-4a10-9008-05386a4eff65" containerID="73be924bfae8df164b69d9863a72bc4eb49dc9c8b7b55e703cc95aca9e085ecf" exitCode=0 Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.571578 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerDied","Data":"73be924bfae8df164b69d9863a72bc4eb49dc9c8b7b55e703cc95aca9e085ecf"} Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.575577 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerStarted","Data":"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6"} Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.602272 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r79bq" podStartSLOduration=2.202604424 podStartE2EDuration="3.60225233s" podCreationTimestamp="2026-03-11 09:35:55 +0000 UTC" firstStartedPulling="2026-03-11 09:35:56.553274632 +0000 UTC m=+3407.506597952" lastFinishedPulling="2026-03-11 09:35:57.952922538 +0000 UTC m=+3408.906245858" observedRunningTime="2026-03-11 09:35:58.598491923 +0000 UTC m=+3409.551815243" watchObservedRunningTime="2026-03-11 09:35:58.60225233 +0000 UTC m=+3409.555575670" Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.847632 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.960606 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content\") pod \"53ce013b-83ba-4a10-9008-05386a4eff65\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.960678 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7zxp\" (UniqueName: \"kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp\") pod \"53ce013b-83ba-4a10-9008-05386a4eff65\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.960755 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities\") pod \"53ce013b-83ba-4a10-9008-05386a4eff65\" (UID: \"53ce013b-83ba-4a10-9008-05386a4eff65\") " Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.961624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities" (OuterVolumeSpecName: "utilities") pod "53ce013b-83ba-4a10-9008-05386a4eff65" (UID: "53ce013b-83ba-4a10-9008-05386a4eff65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:35:58 crc kubenswrapper[4808]: I0311 09:35:58.968731 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp" (OuterVolumeSpecName: "kube-api-access-k7zxp") pod "53ce013b-83ba-4a10-9008-05386a4eff65" (UID: "53ce013b-83ba-4a10-9008-05386a4eff65"). InnerVolumeSpecName "kube-api-access-k7zxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.062757 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7zxp\" (UniqueName: \"kubernetes.io/projected/53ce013b-83ba-4a10-9008-05386a4eff65-kube-api-access-k7zxp\") on node \"crc\" DevicePath \"\"" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.062816 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.099721 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53ce013b-83ba-4a10-9008-05386a4eff65" (UID: "53ce013b-83ba-4a10-9008-05386a4eff65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.165398 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ce013b-83ba-4a10-9008-05386a4eff65-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.603493 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6k6l" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.603695 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6k6l" event={"ID":"53ce013b-83ba-4a10-9008-05386a4eff65","Type":"ContainerDied","Data":"803f1ade4f7605f13588830e82b716828308002a97caec9b0d6f705a96af4113"} Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.603734 4808 scope.go:117] "RemoveContainer" containerID="73be924bfae8df164b69d9863a72bc4eb49dc9c8b7b55e703cc95aca9e085ecf" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.633285 4808 scope.go:117] "RemoveContainer" containerID="e79f72eaa362867f9f14ed70b849521cff8ebe898e82f64805d21b7ed5fbd490" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.634736 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.642447 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w6k6l"] Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.667705 4808 scope.go:117] "RemoveContainer" containerID="472ace70bb324d4426754cf7bf1f881934bc9b94b192e40ef62f207891355baa" Mar 11 09:35:59 crc kubenswrapper[4808]: I0311 09:35:59.797146 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" path="/var/lib/kubelet/pods/53ce013b-83ba-4a10-9008-05386a4eff65/volumes" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.153379 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553696-mp8rt"] Mar 11 09:36:00 crc kubenswrapper[4808]: E0311 09:36:00.154038 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="extract-content" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.154063 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="extract-content" Mar 11 09:36:00 crc kubenswrapper[4808]: E0311 09:36:00.154091 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="registry-server" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.154098 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="registry-server" Mar 11 09:36:00 crc kubenswrapper[4808]: E0311 09:36:00.154111 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="extract-utilities" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.154118 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="extract-utilities" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.154304 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ce013b-83ba-4a10-9008-05386a4eff65" containerName="registry-server" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.154923 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.158606 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.158760 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.159120 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.163435 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-mp8rt"] Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.284150 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8wk\" (UniqueName: \"kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk\") pod \"auto-csr-approver-29553696-mp8rt\" (UID: \"71858ed3-1101-4d2d-8ad3-b4006f61dd75\") " pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.386072 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv8wk\" (UniqueName: \"kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk\") pod \"auto-csr-approver-29553696-mp8rt\" (UID: \"71858ed3-1101-4d2d-8ad3-b4006f61dd75\") " pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.408508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv8wk\" (UniqueName: \"kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk\") pod \"auto-csr-approver-29553696-mp8rt\" (UID: \"71858ed3-1101-4d2d-8ad3-b4006f61dd75\") " 
pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.472906 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:00 crc kubenswrapper[4808]: I0311 09:36:00.960838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-mp8rt"] Mar 11 09:36:00 crc kubenswrapper[4808]: W0311 09:36:00.963685 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71858ed3_1101_4d2d_8ad3_b4006f61dd75.slice/crio-af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327 WatchSource:0}: Error finding container af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327: Status 404 returned error can't find the container with id af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327 Mar 11 09:36:01 crc kubenswrapper[4808]: I0311 09:36:01.628626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" event={"ID":"71858ed3-1101-4d2d-8ad3-b4006f61dd75","Type":"ContainerStarted","Data":"af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327"} Mar 11 09:36:02 crc kubenswrapper[4808]: I0311 09:36:02.637609 4808 generic.go:334] "Generic (PLEG): container finished" podID="71858ed3-1101-4d2d-8ad3-b4006f61dd75" containerID="46842e97d4d490d4fd500ca93f8e38f58eade08d5ac3b42000ffd8f8733a634a" exitCode=0 Mar 11 09:36:02 crc kubenswrapper[4808]: I0311 09:36:02.637658 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" event={"ID":"71858ed3-1101-4d2d-8ad3-b4006f61dd75","Type":"ContainerDied","Data":"46842e97d4d490d4fd500ca93f8e38f58eade08d5ac3b42000ffd8f8733a634a"} Mar 11 09:36:03 crc kubenswrapper[4808]: I0311 09:36:03.983417 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.145504 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv8wk\" (UniqueName: \"kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk\") pod \"71858ed3-1101-4d2d-8ad3-b4006f61dd75\" (UID: \"71858ed3-1101-4d2d-8ad3-b4006f61dd75\") " Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.151661 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk" (OuterVolumeSpecName: "kube-api-access-cv8wk") pod "71858ed3-1101-4d2d-8ad3-b4006f61dd75" (UID: "71858ed3-1101-4d2d-8ad3-b4006f61dd75"). InnerVolumeSpecName "kube-api-access-cv8wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.246732 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv8wk\" (UniqueName: \"kubernetes.io/projected/71858ed3-1101-4d2d-8ad3-b4006f61dd75-kube-api-access-cv8wk\") on node \"crc\" DevicePath \"\"" Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.659772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" event={"ID":"71858ed3-1101-4d2d-8ad3-b4006f61dd75","Type":"ContainerDied","Data":"af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327"} Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.660134 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6d4481409830c010c2e697e82d2cb8c1dfb3ddff06458b2cd16afe55607327" Mar 11 09:36:04 crc kubenswrapper[4808]: I0311 09:36:04.659900 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-mp8rt" Mar 11 09:36:05 crc kubenswrapper[4808]: I0311 09:36:05.076220 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-w5cxs"] Mar 11 09:36:05 crc kubenswrapper[4808]: I0311 09:36:05.087744 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-w5cxs"] Mar 11 09:36:05 crc kubenswrapper[4808]: I0311 09:36:05.800404 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a745817-9e61-4724-8327-136b52963f82" path="/var/lib/kubelet/pods/9a745817-9e61-4724-8327-136b52963f82/volumes" Mar 11 09:36:05 crc kubenswrapper[4808]: I0311 09:36:05.939138 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:05 crc kubenswrapper[4808]: I0311 09:36:05.939186 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:06 crc kubenswrapper[4808]: I0311 09:36:06.013398 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:06 crc kubenswrapper[4808]: I0311 09:36:06.731786 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:06 crc kubenswrapper[4808]: I0311 09:36:06.982649 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:36:08 crc kubenswrapper[4808]: I0311 09:36:08.695880 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r79bq" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="registry-server" containerID="cri-o://1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6" gracePeriod=2 Mar 11 09:36:09 crc 
kubenswrapper[4808]: I0311 09:36:09.112875 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.216558 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content\") pod \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.216694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities\") pod \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.216724 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpgtn\" (UniqueName: \"kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn\") pod \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\" (UID: \"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe\") " Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.217448 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities" (OuterVolumeSpecName: "utilities") pod "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" (UID: "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.221125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn" (OuterVolumeSpecName: "kube-api-access-gpgtn") pod "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" (UID: "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe"). InnerVolumeSpecName "kube-api-access-gpgtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.241877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" (UID: "f6a32fcf-262f-4ec5-979d-f7ecf9331bbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.318398 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.318459 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.318478 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpgtn\" (UniqueName: \"kubernetes.io/projected/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe-kube-api-access-gpgtn\") on node \"crc\" DevicePath \"\"" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.707215 4808 generic.go:334] "Generic (PLEG): container finished" podID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" 
containerID="1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6" exitCode=0 Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.707269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerDied","Data":"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6"} Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.707332 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r79bq" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.707349 4808 scope.go:117] "RemoveContainer" containerID="1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.707334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r79bq" event={"ID":"f6a32fcf-262f-4ec5-979d-f7ecf9331bbe","Type":"ContainerDied","Data":"cb01d3f71238fe06cd3f01c27d5c441accda2bd1f17c1986b28ba74422556da0"} Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.735278 4808 scope.go:117] "RemoveContainer" containerID="166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.751963 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.759083 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r79bq"] Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.766425 4808 scope.go:117] "RemoveContainer" containerID="f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.784647 4808 scope.go:117] "RemoveContainer" containerID="1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6" Mar 11 
09:36:09 crc kubenswrapper[4808]: E0311 09:36:09.785281 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6\": container with ID starting with 1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6 not found: ID does not exist" containerID="1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.785311 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6"} err="failed to get container status \"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6\": rpc error: code = NotFound desc = could not find container \"1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6\": container with ID starting with 1b4c14aaa1a5893363db64119ec4e166e836961e90ad6ce7d6da9265e6764ba6 not found: ID does not exist" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.785348 4808 scope.go:117] "RemoveContainer" containerID="166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268" Mar 11 09:36:09 crc kubenswrapper[4808]: E0311 09:36:09.785966 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268\": container with ID starting with 166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268 not found: ID does not exist" containerID="166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.785996 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268"} err="failed to get container status 
\"166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268\": rpc error: code = NotFound desc = could not find container \"166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268\": container with ID starting with 166c7008ed9322db8e6ff8512f74c31203fc0ca43295fb4e34c12dcefe9da268 not found: ID does not exist" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.786037 4808 scope.go:117] "RemoveContainer" containerID="f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b" Mar 11 09:36:09 crc kubenswrapper[4808]: E0311 09:36:09.786877 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b\": container with ID starting with f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b not found: ID does not exist" containerID="f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.786904 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b"} err="failed to get container status \"f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b\": rpc error: code = NotFound desc = could not find container \"f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b\": container with ID starting with f0f127f491fcaaa084be1b377f25679aeed1772fbca37d987bac1154ba546c0b not found: ID does not exist" Mar 11 09:36:09 crc kubenswrapper[4808]: I0311 09:36:09.808665 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" path="/var/lib/kubelet/pods/f6a32fcf-262f-4ec5-979d-f7ecf9331bbe/volumes" Mar 11 09:36:16 crc kubenswrapper[4808]: I0311 09:36:16.028772 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:36:16 crc kubenswrapper[4808]: I0311 09:36:16.029605 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:36:46 crc kubenswrapper[4808]: I0311 09:36:46.027303 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:36:46 crc kubenswrapper[4808]: I0311 09:36:46.027917 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:36:46 crc kubenswrapper[4808]: I0311 09:36:46.027966 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:36:46 crc kubenswrapper[4808]: I0311 09:36:46.028613 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:36:46 crc 
kubenswrapper[4808]: I0311 09:36:46.028665 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c" gracePeriod=600 Mar 11 09:36:47 crc kubenswrapper[4808]: I0311 09:36:47.003477 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c" exitCode=0 Mar 11 09:36:47 crc kubenswrapper[4808]: I0311 09:36:47.003539 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c"} Mar 11 09:36:47 crc kubenswrapper[4808]: I0311 09:36:47.003962 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb"} Mar 11 09:36:47 crc kubenswrapper[4808]: I0311 09:36:47.003991 4808 scope.go:117] "RemoveContainer" containerID="da4536925bbcce34ca74e0b8d6dd1cb5d3d10fde6f44c64699696606d3835faa" Mar 11 09:36:54 crc kubenswrapper[4808]: I0311 09:36:54.739241 4808 scope.go:117] "RemoveContainer" containerID="c272e6cb0dca49600b41ba5a4fbaec798371e54a84b290fd82bcfd3e0b34e7f1" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.151458 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553698-nvqfn"] Mar 11 09:38:00 crc kubenswrapper[4808]: E0311 09:38:00.153141 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="71858ed3-1101-4d2d-8ad3-b4006f61dd75" containerName="oc" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153164 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="71858ed3-1101-4d2d-8ad3-b4006f61dd75" containerName="oc" Mar 11 09:38:00 crc kubenswrapper[4808]: E0311 09:38:00.153203 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153211 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4808]: E0311 09:38:00.153221 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="extract-utilities" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153231 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="extract-utilities" Mar 11 09:38:00 crc kubenswrapper[4808]: E0311 09:38:00.153266 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="extract-content" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153274 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="extract-content" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153486 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a32fcf-262f-4ec5-979d-f7ecf9331bbe" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.153519 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="71858ed3-1101-4d2d-8ad3-b4006f61dd75" containerName="oc" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.154332 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.158034 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.158221 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.158697 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.161165 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-nvqfn"] Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.312417 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bcvq\" (UniqueName: \"kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq\") pod \"auto-csr-approver-29553698-nvqfn\" (UID: \"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208\") " pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.413283 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bcvq\" (UniqueName: \"kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq\") pod \"auto-csr-approver-29553698-nvqfn\" (UID: \"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208\") " pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.444026 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bcvq\" (UniqueName: \"kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq\") pod \"auto-csr-approver-29553698-nvqfn\" (UID: \"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208\") " 
pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.479722 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:00 crc kubenswrapper[4808]: I0311 09:38:00.928875 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-nvqfn"] Mar 11 09:38:01 crc kubenswrapper[4808]: I0311 09:38:01.592317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" event={"ID":"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208","Type":"ContainerStarted","Data":"7e2f4d98bf4bcf6275f587e69b04b0ed2a7fd7797869b2a1a4b27407521f3920"} Mar 11 09:38:02 crc kubenswrapper[4808]: I0311 09:38:02.601689 4808 generic.go:334] "Generic (PLEG): container finished" podID="23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" containerID="57603643d93a24319b561448710b13a69e0e77a985604db9d5af05d8c43b37f5" exitCode=0 Mar 11 09:38:02 crc kubenswrapper[4808]: I0311 09:38:02.601870 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" event={"ID":"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208","Type":"ContainerDied","Data":"57603643d93a24319b561448710b13a69e0e77a985604db9d5af05d8c43b37f5"} Mar 11 09:38:03 crc kubenswrapper[4808]: I0311 09:38:03.892259 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.074186 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bcvq\" (UniqueName: \"kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq\") pod \"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208\" (UID: \"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208\") " Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.082919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq" (OuterVolumeSpecName: "kube-api-access-4bcvq") pod "23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" (UID: "23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208"). InnerVolumeSpecName "kube-api-access-4bcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.176867 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bcvq\" (UniqueName: \"kubernetes.io/projected/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208-kube-api-access-4bcvq\") on node \"crc\" DevicePath \"\"" Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.618243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" event={"ID":"23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208","Type":"ContainerDied","Data":"7e2f4d98bf4bcf6275f587e69b04b0ed2a7fd7797869b2a1a4b27407521f3920"} Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.618280 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2f4d98bf4bcf6275f587e69b04b0ed2a7fd7797869b2a1a4b27407521f3920" Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.618291 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-nvqfn" Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.957592 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-9vgcj"] Mar 11 09:38:04 crc kubenswrapper[4808]: I0311 09:38:04.964139 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-9vgcj"] Mar 11 09:38:05 crc kubenswrapper[4808]: I0311 09:38:05.802908 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f85b3a-83e6-4767-b819-e5c20a147972" path="/var/lib/kubelet/pods/87f85b3a-83e6-4767-b819-e5c20a147972/volumes" Mar 11 09:38:46 crc kubenswrapper[4808]: I0311 09:38:46.027952 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:38:46 crc kubenswrapper[4808]: I0311 09:38:46.028426 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:38:54 crc kubenswrapper[4808]: I0311 09:38:54.855093 4808 scope.go:117] "RemoveContainer" containerID="525c51ddbe213746de9164ecd521193b3950372c485b90c7b7ff241e508b16fe" Mar 11 09:39:16 crc kubenswrapper[4808]: I0311 09:39:16.027155 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:39:16 crc kubenswrapper[4808]: 
I0311 09:39:16.027695 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.028213 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.028891 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.029045 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.029898 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.030051 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" 
containerName="machine-config-daemon" containerID="cri-o://327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" gracePeriod=600 Mar 11 09:39:46 crc kubenswrapper[4808]: E0311 09:39:46.150765 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.393925 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" exitCode=0 Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.393965 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb"} Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.394403 4808 scope.go:117] "RemoveContainer" containerID="4dac0e2e6e145c420b5cb6c39ed1e2098a6a0826a1b856327936bbb0a9d7973c" Mar 11 09:39:46 crc kubenswrapper[4808]: I0311 09:39:46.395056 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:39:46 crc kubenswrapper[4808]: E0311 09:39:46.395558 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:39:57 crc kubenswrapper[4808]: I0311 09:39:57.789857 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:39:57 crc kubenswrapper[4808]: E0311 09:39:57.790715 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.161049 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553700-9rdwx"] Mar 11 09:40:00 crc kubenswrapper[4808]: E0311 09:40:00.162173 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.162206 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.162632 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.163739 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.167026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.167023 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.171908 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.173271 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-9rdwx"] Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.233745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57p86\" (UniqueName: \"kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86\") pod \"auto-csr-approver-29553700-9rdwx\" (UID: \"fc2d2ddf-c0da-4693-9701-dff7ede958e0\") " pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.336114 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57p86\" (UniqueName: \"kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86\") pod \"auto-csr-approver-29553700-9rdwx\" (UID: \"fc2d2ddf-c0da-4693-9701-dff7ede958e0\") " pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.357737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57p86\" (UniqueName: \"kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86\") pod \"auto-csr-approver-29553700-9rdwx\" (UID: \"fc2d2ddf-c0da-4693-9701-dff7ede958e0\") " 
pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.491881 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.939823 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-9rdwx"] Mar 11 09:40:00 crc kubenswrapper[4808]: W0311 09:40:00.954989 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2d2ddf_c0da_4693_9701_dff7ede958e0.slice/crio-a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5 WatchSource:0}: Error finding container a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5: Status 404 returned error can't find the container with id a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5 Mar 11 09:40:00 crc kubenswrapper[4808]: I0311 09:40:00.957990 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:40:01 crc kubenswrapper[4808]: I0311 09:40:01.546310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" event={"ID":"fc2d2ddf-c0da-4693-9701-dff7ede958e0","Type":"ContainerStarted","Data":"a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5"} Mar 11 09:40:02 crc kubenswrapper[4808]: I0311 09:40:02.556310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" event={"ID":"fc2d2ddf-c0da-4693-9701-dff7ede958e0","Type":"ContainerStarted","Data":"85321075453d34457b6d479d6eb8c04cda02d25f3489b5400f1f37ef636c646f"} Mar 11 09:40:02 crc kubenswrapper[4808]: I0311 09:40:02.584534 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" 
podStartSLOduration=1.435495197 podStartE2EDuration="2.58451068s" podCreationTimestamp="2026-03-11 09:40:00 +0000 UTC" firstStartedPulling="2026-03-11 09:40:00.957619834 +0000 UTC m=+3651.910943154" lastFinishedPulling="2026-03-11 09:40:02.106635317 +0000 UTC m=+3653.059958637" observedRunningTime="2026-03-11 09:40:02.574657291 +0000 UTC m=+3653.527980611" watchObservedRunningTime="2026-03-11 09:40:02.58451068 +0000 UTC m=+3653.537834000" Mar 11 09:40:03 crc kubenswrapper[4808]: I0311 09:40:03.566820 4808 generic.go:334] "Generic (PLEG): container finished" podID="fc2d2ddf-c0da-4693-9701-dff7ede958e0" containerID="85321075453d34457b6d479d6eb8c04cda02d25f3489b5400f1f37ef636c646f" exitCode=0 Mar 11 09:40:03 crc kubenswrapper[4808]: I0311 09:40:03.566918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" event={"ID":"fc2d2ddf-c0da-4693-9701-dff7ede958e0","Type":"ContainerDied","Data":"85321075453d34457b6d479d6eb8c04cda02d25f3489b5400f1f37ef636c646f"} Mar 11 09:40:04 crc kubenswrapper[4808]: I0311 09:40:04.875308 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.003111 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57p86\" (UniqueName: \"kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86\") pod \"fc2d2ddf-c0da-4693-9701-dff7ede958e0\" (UID: \"fc2d2ddf-c0da-4693-9701-dff7ede958e0\") " Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.011820 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86" (OuterVolumeSpecName: "kube-api-access-57p86") pod "fc2d2ddf-c0da-4693-9701-dff7ede958e0" (UID: "fc2d2ddf-c0da-4693-9701-dff7ede958e0"). InnerVolumeSpecName "kube-api-access-57p86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.104528 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57p86\" (UniqueName: \"kubernetes.io/projected/fc2d2ddf-c0da-4693-9701-dff7ede958e0-kube-api-access-57p86\") on node \"crc\" DevicePath \"\"" Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.582907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" event={"ID":"fc2d2ddf-c0da-4693-9701-dff7ede958e0","Type":"ContainerDied","Data":"a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5"} Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.582961 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-9rdwx" Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.582971 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7447e25baa4225982d1664b056a0c3ade3a4def5fc30d925af23e1b2f2f9bc5" Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.658054 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-rvshs"] Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.666498 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-rvshs"] Mar 11 09:40:05 crc kubenswrapper[4808]: I0311 09:40:05.800976 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b647eaf-87f1-4ce1-b0b2-3a359584071e" path="/var/lib/kubelet/pods/9b647eaf-87f1-4ce1-b0b2-3a359584071e/volumes" Mar 11 09:40:09 crc kubenswrapper[4808]: I0311 09:40:09.797481 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:40:09 crc kubenswrapper[4808]: E0311 09:40:09.798791 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:40:21 crc kubenswrapper[4808]: I0311 09:40:21.789569 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:40:21 crc kubenswrapper[4808]: E0311 09:40:21.790193 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:40:33 crc kubenswrapper[4808]: I0311 09:40:33.791252 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:40:33 crc kubenswrapper[4808]: E0311 09:40:33.793045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:40:45 crc kubenswrapper[4808]: I0311 09:40:45.790030 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:40:45 crc kubenswrapper[4808]: E0311 09:40:45.790896 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:40:54 crc kubenswrapper[4808]: I0311 09:40:54.945621 4808 scope.go:117] "RemoveContainer" containerID="c35c72cfcc2b9905f6a9871c999a79ab223bc581d9b12edc5b3f439f56c1926c" Mar 11 09:40:56 crc kubenswrapper[4808]: I0311 09:40:56.789785 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:40:56 crc kubenswrapper[4808]: E0311 09:40:56.790189 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:41:07 crc kubenswrapper[4808]: I0311 09:41:07.789151 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:41:07 crc kubenswrapper[4808]: E0311 09:41:07.789936 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:41:22 crc kubenswrapper[4808]: I0311 09:41:22.789198 4808 scope.go:117] 
"RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:41:22 crc kubenswrapper[4808]: E0311 09:41:22.790153 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:41:36 crc kubenswrapper[4808]: I0311 09:41:36.789028 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:41:36 crc kubenswrapper[4808]: E0311 09:41:36.789963 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:41:47 crc kubenswrapper[4808]: I0311 09:41:47.789478 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:41:47 crc kubenswrapper[4808]: E0311 09:41:47.790136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.143176 
4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553702-9465s"] Mar 11 09:42:00 crc kubenswrapper[4808]: E0311 09:42:00.144148 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2d2ddf-c0da-4693-9701-dff7ede958e0" containerName="oc" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.144166 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2d2ddf-c0da-4693-9701-dff7ede958e0" containerName="oc" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.144544 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2d2ddf-c0da-4693-9701-dff7ede958e0" containerName="oc" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.145134 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.148144 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.148588 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.148836 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.170706 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-9465s"] Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.308666 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx\") pod \"auto-csr-approver-29553702-9465s\" (UID: \"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6\") " 
pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.409954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx\") pod \"auto-csr-approver-29553702-9465s\" (UID: \"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6\") " pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.433827 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx\") pod \"auto-csr-approver-29553702-9465s\" (UID: \"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6\") " pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.467261 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:00 crc kubenswrapper[4808]: I0311 09:42:00.883980 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-9465s"] Mar 11 09:42:01 crc kubenswrapper[4808]: I0311 09:42:01.568351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-9465s" event={"ID":"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6","Type":"ContainerStarted","Data":"9411a8826d82c3551ec96f022b006048d9776e7c1b0c706a8be0a1b41de68f36"} Mar 11 09:42:02 crc kubenswrapper[4808]: I0311 09:42:02.579950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-9465s" event={"ID":"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6","Type":"ContainerStarted","Data":"631edf2edd83d1863a615f373b866598a0970c3f9954b64cd536c31993076b8c"} Mar 11 09:42:02 crc kubenswrapper[4808]: I0311 09:42:02.606676 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553702-9465s" podStartSLOduration=1.4049859470000001 podStartE2EDuration="2.606652282s" podCreationTimestamp="2026-03-11 09:42:00 +0000 UTC" firstStartedPulling="2026-03-11 09:42:00.892977117 +0000 UTC m=+3771.846300437" lastFinishedPulling="2026-03-11 09:42:02.094643442 +0000 UTC m=+3773.047966772" observedRunningTime="2026-03-11 09:42:02.599851939 +0000 UTC m=+3773.553175269" watchObservedRunningTime="2026-03-11 09:42:02.606652282 +0000 UTC m=+3773.559975622" Mar 11 09:42:02 crc kubenswrapper[4808]: I0311 09:42:02.789862 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:42:02 crc kubenswrapper[4808]: E0311 09:42:02.790148 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:03 crc kubenswrapper[4808]: I0311 09:42:03.588875 4808 generic.go:334] "Generic (PLEG): container finished" podID="dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" containerID="631edf2edd83d1863a615f373b866598a0970c3f9954b64cd536c31993076b8c" exitCode=0 Mar 11 09:42:03 crc kubenswrapper[4808]: I0311 09:42:03.589091 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-9465s" event={"ID":"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6","Type":"ContainerDied","Data":"631edf2edd83d1863a615f373b866598a0970c3f9954b64cd536c31993076b8c"} Mar 11 09:42:04 crc kubenswrapper[4808]: I0311 09:42:04.922655 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:04 crc kubenswrapper[4808]: I0311 09:42:04.980124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx\") pod \"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6\" (UID: \"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6\") " Mar 11 09:42:04 crc kubenswrapper[4808]: I0311 09:42:04.986649 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx" (OuterVolumeSpecName: "kube-api-access-sd5tx") pod "dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" (UID: "dea1bd3a-7355-4bcb-b908-2ab2b9e970a6"). InnerVolumeSpecName "kube-api-access-sd5tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.081576 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6-kube-api-access-sd5tx\") on node \"crc\" DevicePath \"\"" Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.608138 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-9465s" event={"ID":"dea1bd3a-7355-4bcb-b908-2ab2b9e970a6","Type":"ContainerDied","Data":"9411a8826d82c3551ec96f022b006048d9776e7c1b0c706a8be0a1b41de68f36"} Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.608182 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9411a8826d82c3551ec96f022b006048d9776e7c1b0c706a8be0a1b41de68f36" Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.608238 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-9465s" Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.665713 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-mp8rt"] Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.672458 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-mp8rt"] Mar 11 09:42:05 crc kubenswrapper[4808]: I0311 09:42:05.800582 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71858ed3-1101-4d2d-8ad3-b4006f61dd75" path="/var/lib/kubelet/pods/71858ed3-1101-4d2d-8ad3-b4006f61dd75/volumes" Mar 11 09:42:16 crc kubenswrapper[4808]: I0311 09:42:16.789478 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:42:16 crc kubenswrapper[4808]: E0311 09:42:16.791016 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:27 crc kubenswrapper[4808]: I0311 09:42:27.789174 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:42:27 crc kubenswrapper[4808]: E0311 09:42:27.790117 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:40 crc kubenswrapper[4808]: I0311 09:42:40.789814 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:42:40 crc kubenswrapper[4808]: E0311 09:42:40.791047 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:51 crc kubenswrapper[4808]: I0311 09:42:51.789920 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:42:51 crc kubenswrapper[4808]: E0311 09:42:51.790713 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:42:55 crc kubenswrapper[4808]: I0311 09:42:55.055663 4808 scope.go:117] "RemoveContainer" containerID="46842e97d4d490d4fd500ca93f8e38f58eade08d5ac3b42000ffd8f8733a634a" Mar 11 09:43:02 crc kubenswrapper[4808]: I0311 09:43:02.789958 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:43:02 crc kubenswrapper[4808]: E0311 09:43:02.791124 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:43:15 crc kubenswrapper[4808]: I0311 09:43:15.789480 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:43:15 crc kubenswrapper[4808]: E0311 09:43:15.790241 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:43:30 crc kubenswrapper[4808]: I0311 09:43:30.790070 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:43:30 crc kubenswrapper[4808]: E0311 09:43:30.791268 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:43:41 crc kubenswrapper[4808]: I0311 09:43:41.789346 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:43:41 crc kubenswrapper[4808]: E0311 09:43:41.790199 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:43:53 crc kubenswrapper[4808]: I0311 09:43:53.789609 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:43:53 crc kubenswrapper[4808]: E0311 09:43:53.790351 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.144693 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553704-d5nxf"] Mar 11 09:44:00 crc kubenswrapper[4808]: E0311 09:44:00.145420 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" containerName="oc" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.145438 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" containerName="oc" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.145622 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" containerName="oc" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.146244 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.148452 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.149751 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.149969 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.152058 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-d5nxf"] Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.325308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdt2\" (UniqueName: \"kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2\") pod \"auto-csr-approver-29553704-d5nxf\" (UID: \"afc64431-a07e-43e9-8350-988a62854b7c\") " pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.427207 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdt2\" (UniqueName: \"kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2\") pod \"auto-csr-approver-29553704-d5nxf\" (UID: \"afc64431-a07e-43e9-8350-988a62854b7c\") " pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.454141 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdt2\" (UniqueName: \"kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2\") pod \"auto-csr-approver-29553704-d5nxf\" (UID: \"afc64431-a07e-43e9-8350-988a62854b7c\") " 
pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:00 crc kubenswrapper[4808]: I0311 09:44:00.476770 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:01 crc kubenswrapper[4808]: I0311 09:44:01.009534 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-d5nxf"] Mar 11 09:44:01 crc kubenswrapper[4808]: I0311 09:44:01.570682 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" event={"ID":"afc64431-a07e-43e9-8350-988a62854b7c","Type":"ContainerStarted","Data":"4171fa7e94b8b2b6a63fdb5fb08c15f965a9703c92792da71605ad7e86242768"} Mar 11 09:44:02 crc kubenswrapper[4808]: I0311 09:44:02.579955 4808 generic.go:334] "Generic (PLEG): container finished" podID="afc64431-a07e-43e9-8350-988a62854b7c" containerID="90b06ea4f456f3253064833db25778aff5d8bbf457c48c94cb64201e135a6c34" exitCode=0 Mar 11 09:44:02 crc kubenswrapper[4808]: I0311 09:44:02.580067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" event={"ID":"afc64431-a07e-43e9-8350-988a62854b7c","Type":"ContainerDied","Data":"90b06ea4f456f3253064833db25778aff5d8bbf457c48c94cb64201e135a6c34"} Mar 11 09:44:03 crc kubenswrapper[4808]: I0311 09:44:03.955494 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.082190 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdt2\" (UniqueName: \"kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2\") pod \"afc64431-a07e-43e9-8350-988a62854b7c\" (UID: \"afc64431-a07e-43e9-8350-988a62854b7c\") " Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.088880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2" (OuterVolumeSpecName: "kube-api-access-wzdt2") pod "afc64431-a07e-43e9-8350-988a62854b7c" (UID: "afc64431-a07e-43e9-8350-988a62854b7c"). InnerVolumeSpecName "kube-api-access-wzdt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.184300 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdt2\" (UniqueName: \"kubernetes.io/projected/afc64431-a07e-43e9-8350-988a62854b7c-kube-api-access-wzdt2\") on node \"crc\" DevicePath \"\"" Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.602030 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" event={"ID":"afc64431-a07e-43e9-8350-988a62854b7c","Type":"ContainerDied","Data":"4171fa7e94b8b2b6a63fdb5fb08c15f965a9703c92792da71605ad7e86242768"} Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.602466 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4171fa7e94b8b2b6a63fdb5fb08c15f965a9703c92792da71605ad7e86242768" Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.602116 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-d5nxf" Mar 11 09:44:04 crc kubenswrapper[4808]: I0311 09:44:04.790221 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:44:04 crc kubenswrapper[4808]: E0311 09:44:04.790785 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:44:05 crc kubenswrapper[4808]: I0311 09:44:05.025932 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-nvqfn"] Mar 11 09:44:05 crc kubenswrapper[4808]: I0311 09:44:05.030884 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-nvqfn"] Mar 11 09:44:05 crc kubenswrapper[4808]: I0311 09:44:05.801655 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208" path="/var/lib/kubelet/pods/23f5ba6a-4c95-4a2e-a1d0-b5e7c96da208/volumes" Mar 11 09:44:17 crc kubenswrapper[4808]: I0311 09:44:17.790056 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:44:17 crc kubenswrapper[4808]: E0311 09:44:17.790909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.879293 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mcsmp"] Mar 11 09:44:25 crc kubenswrapper[4808]: E0311 09:44:25.880284 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc64431-a07e-43e9-8350-988a62854b7c" containerName="oc" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.880309 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc64431-a07e-43e9-8350-988a62854b7c" containerName="oc" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.880570 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc64431-a07e-43e9-8350-988a62854b7c" containerName="oc" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.881889 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.893726 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mcsmp"] Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.913941 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j56x\" (UniqueName: \"kubernetes.io/projected/f4446619-4a8d-4e5c-8b21-6c47bbace99c-kube-api-access-9j56x\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.914017 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-catalog-content\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 
09:44:25 crc kubenswrapper[4808]: I0311 09:44:25.914058 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-utilities\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.015893 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j56x\" (UniqueName: \"kubernetes.io/projected/f4446619-4a8d-4e5c-8b21-6c47bbace99c-kube-api-access-9j56x\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.015958 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-catalog-content\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.015999 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-utilities\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.016625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-utilities\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc 
kubenswrapper[4808]: I0311 09:44:26.016650 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4446619-4a8d-4e5c-8b21-6c47bbace99c-catalog-content\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.039507 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j56x\" (UniqueName: \"kubernetes.io/projected/f4446619-4a8d-4e5c-8b21-6c47bbace99c-kube-api-access-9j56x\") pod \"certified-operators-mcsmp\" (UID: \"f4446619-4a8d-4e5c-8b21-6c47bbace99c\") " pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.213292 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:26 crc kubenswrapper[4808]: I0311 09:44:26.666985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mcsmp"] Mar 11 09:44:27 crc kubenswrapper[4808]: I0311 09:44:27.798735 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4446619-4a8d-4e5c-8b21-6c47bbace99c" containerID="8a3325f47af9b1923bb820559b406915bb36c956a533bb229100fca91ac03442" exitCode=0 Mar 11 09:44:27 crc kubenswrapper[4808]: I0311 09:44:27.806660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcsmp" event={"ID":"f4446619-4a8d-4e5c-8b21-6c47bbace99c","Type":"ContainerDied","Data":"8a3325f47af9b1923bb820559b406915bb36c956a533bb229100fca91ac03442"} Mar 11 09:44:27 crc kubenswrapper[4808]: I0311 09:44:27.807065 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcsmp" 
event={"ID":"f4446619-4a8d-4e5c-8b21-6c47bbace99c","Type":"ContainerStarted","Data":"da7d3be97164fa9190bf9b4f0956acfedeb6eec8a833e484175516143d8fc30c"} Mar 11 09:44:30 crc kubenswrapper[4808]: I0311 09:44:30.789072 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:44:30 crc kubenswrapper[4808]: E0311 09:44:30.789658 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:44:32 crc kubenswrapper[4808]: I0311 09:44:32.844933 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4446619-4a8d-4e5c-8b21-6c47bbace99c" containerID="1f1282aafe23a94f08bb1e0e714096a0ce72bb9964fac2c85555f9c7b92cb542" exitCode=0 Mar 11 09:44:32 crc kubenswrapper[4808]: I0311 09:44:32.845049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcsmp" event={"ID":"f4446619-4a8d-4e5c-8b21-6c47bbace99c","Type":"ContainerDied","Data":"1f1282aafe23a94f08bb1e0e714096a0ce72bb9964fac2c85555f9c7b92cb542"} Mar 11 09:44:33 crc kubenswrapper[4808]: I0311 09:44:33.859980 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mcsmp" event={"ID":"f4446619-4a8d-4e5c-8b21-6c47bbace99c","Type":"ContainerStarted","Data":"3c400087f191fd0c8ca0ccbec7db3693cee67b98608997da5e2687ae6c4e26f9"} Mar 11 09:44:33 crc kubenswrapper[4808]: I0311 09:44:33.898179 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mcsmp" podStartSLOduration=3.221693646 podStartE2EDuration="8.89815824s" 
podCreationTimestamp="2026-03-11 09:44:25 +0000 UTC" firstStartedPulling="2026-03-11 09:44:27.80184767 +0000 UTC m=+3918.755171030" lastFinishedPulling="2026-03-11 09:44:33.478312304 +0000 UTC m=+3924.431635624" observedRunningTime="2026-03-11 09:44:33.88830976 +0000 UTC m=+3924.841633120" watchObservedRunningTime="2026-03-11 09:44:33.89815824 +0000 UTC m=+3924.851481560" Mar 11 09:44:36 crc kubenswrapper[4808]: I0311 09:44:36.214389 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:36 crc kubenswrapper[4808]: I0311 09:44:36.214468 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:36 crc kubenswrapper[4808]: I0311 09:44:36.283283 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:42 crc kubenswrapper[4808]: I0311 09:44:42.789685 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:44:42 crc kubenswrapper[4808]: E0311 09:44:42.790520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.258240 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mcsmp" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.348161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mcsmp"] Mar 11 09:44:46 
crc kubenswrapper[4808]: I0311 09:44:46.409024 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.409279 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xd22" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="registry-server" containerID="cri-o://8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f" gracePeriod=2 Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.869017 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.917464 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gj8\" (UniqueName: \"kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8\") pod \"bd3fd8a8-324f-444f-99ce-9916706b9b32\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.917515 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities\") pod \"bd3fd8a8-324f-444f-99ce-9916706b9b32\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.917578 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content\") pod \"bd3fd8a8-324f-444f-99ce-9916706b9b32\" (UID: \"bd3fd8a8-324f-444f-99ce-9916706b9b32\") " Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.919270 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities" (OuterVolumeSpecName: "utilities") pod "bd3fd8a8-324f-444f-99ce-9916706b9b32" (UID: "bd3fd8a8-324f-444f-99ce-9916706b9b32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.924111 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8" (OuterVolumeSpecName: "kube-api-access-q9gj8") pod "bd3fd8a8-324f-444f-99ce-9916706b9b32" (UID: "bd3fd8a8-324f-444f-99ce-9916706b9b32"). InnerVolumeSpecName "kube-api-access-q9gj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.961212 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerID="8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f" exitCode=0 Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.961597 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xd22" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.961599 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerDied","Data":"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f"} Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.961836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xd22" event={"ID":"bd3fd8a8-324f-444f-99ce-9916706b9b32","Type":"ContainerDied","Data":"0faaa6510220595207f3053181174e90ab03bae40d30071da0e186dae42535c0"} Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.961861 4808 scope.go:117] "RemoveContainer" containerID="8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.985917 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd3fd8a8-324f-444f-99ce-9916706b9b32" (UID: "bd3fd8a8-324f-444f-99ce-9916706b9b32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:44:46 crc kubenswrapper[4808]: I0311 09:44:46.986317 4808 scope.go:117] "RemoveContainer" containerID="d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.006088 4808 scope.go:117] "RemoveContainer" containerID="fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.019138 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gj8\" (UniqueName: \"kubernetes.io/projected/bd3fd8a8-324f-444f-99ce-9916706b9b32-kube-api-access-q9gj8\") on node \"crc\" DevicePath \"\"" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.019174 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.019184 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3fd8a8-324f-444f-99ce-9916706b9b32-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.026928 4808 scope.go:117] "RemoveContainer" containerID="8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f" Mar 11 09:44:47 crc kubenswrapper[4808]: E0311 09:44:47.027375 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f\": container with ID starting with 8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f not found: ID does not exist" containerID="8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.027414 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f"} err="failed to get container status \"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f\": rpc error: code = NotFound desc = could not find container \"8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f\": container with ID starting with 8d68e02f813684ce278eb584da951ff72ce29081d995ab88730f73c9d4f61d8f not found: ID does not exist" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.027436 4808 scope.go:117] "RemoveContainer" containerID="d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42" Mar 11 09:44:47 crc kubenswrapper[4808]: E0311 09:44:47.029982 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42\": container with ID starting with d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42 not found: ID does not exist" containerID="d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.030027 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42"} err="failed to get container status \"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42\": rpc error: code = NotFound desc = could not find container \"d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42\": container with ID starting with d792c6afc2c29d5057276c3350cead8d4cb9e229562a4dbf3d7982dfffa29f42 not found: ID does not exist" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.030051 4808 scope.go:117] "RemoveContainer" containerID="fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b" Mar 11 09:44:47 crc kubenswrapper[4808]: E0311 09:44:47.034784 4808 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b\": container with ID starting with fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b not found: ID does not exist" containerID="fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.034825 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b"} err="failed to get container status \"fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b\": rpc error: code = NotFound desc = could not find container \"fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b\": container with ID starting with fde5b79f6ecb77a664814cef8cb561bb97d7ff084d730d702ddd91e91cdf8b3b not found: ID does not exist" Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.291557 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.296421 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xd22"] Mar 11 09:44:47 crc kubenswrapper[4808]: I0311 09:44:47.798147 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" path="/var/lib/kubelet/pods/bd3fd8a8-324f-444f-99ce-9916706b9b32/volumes" Mar 11 09:44:54 crc kubenswrapper[4808]: I0311 09:44:54.789508 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:44:55 crc kubenswrapper[4808]: I0311 09:44:55.144611 4808 scope.go:117] "RemoveContainer" containerID="57603643d93a24319b561448710b13a69e0e77a985604db9d5af05d8c43b37f5" Mar 11 09:44:56 crc kubenswrapper[4808]: I0311 09:44:56.027246 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35"} Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.156789 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx"] Mar 11 09:45:00 crc kubenswrapper[4808]: E0311 09:45:00.159711 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="registry-server" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.159906 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="registry-server" Mar 11 09:45:00 crc kubenswrapper[4808]: E0311 09:45:00.160030 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="extract-content" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.160141 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="extract-content" Mar 11 09:45:00 crc kubenswrapper[4808]: E0311 09:45:00.160486 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="extract-utilities" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.160635 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="extract-utilities" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.161046 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3fd8a8-324f-444f-99ce-9916706b9b32" containerName="registry-server" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.162226 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.164643 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx"] Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.164761 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.165185 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.311737 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.311831 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.311948 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bfq\" (UniqueName: \"kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.413718 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.413837 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.413869 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bfq\" (UniqueName: \"kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.414637 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.422403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.437842 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2bfq\" (UniqueName: \"kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq\") pod \"collect-profiles-29553705-525jx\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.495721 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:00 crc kubenswrapper[4808]: I0311 09:45:00.967834 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx"] Mar 11 09:45:01 crc kubenswrapper[4808]: I0311 09:45:01.068225 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" event={"ID":"6baca687-187c-4054-8060-85672564432f","Type":"ContainerStarted","Data":"fab9417313a076e3a1437a11c1201fc315309cdcabd6d18b465fdc6ca1e54def"} Mar 11 09:45:02 crc kubenswrapper[4808]: I0311 09:45:02.078237 4808 generic.go:334] "Generic (PLEG): container finished" podID="6baca687-187c-4054-8060-85672564432f" containerID="856340a335e6fb702ad5c5c24ef14ad63c72fd27b1de0efb12719b7c7718138d" exitCode=0 Mar 11 09:45:02 crc kubenswrapper[4808]: I0311 09:45:02.078464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" 
event={"ID":"6baca687-187c-4054-8060-85672564432f","Type":"ContainerDied","Data":"856340a335e6fb702ad5c5c24ef14ad63c72fd27b1de0efb12719b7c7718138d"} Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.375447 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.558670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume\") pod \"6baca687-187c-4054-8060-85672564432f\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.558730 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume\") pod \"6baca687-187c-4054-8060-85672564432f\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.558794 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2bfq\" (UniqueName: \"kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq\") pod \"6baca687-187c-4054-8060-85672564432f\" (UID: \"6baca687-187c-4054-8060-85672564432f\") " Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.559688 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume" (OuterVolumeSpecName: "config-volume") pod "6baca687-187c-4054-8060-85672564432f" (UID: "6baca687-187c-4054-8060-85672564432f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.565153 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6baca687-187c-4054-8060-85672564432f" (UID: "6baca687-187c-4054-8060-85672564432f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.565333 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq" (OuterVolumeSpecName: "kube-api-access-k2bfq") pod "6baca687-187c-4054-8060-85672564432f" (UID: "6baca687-187c-4054-8060-85672564432f"). InnerVolumeSpecName "kube-api-access-k2bfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.660219 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6baca687-187c-4054-8060-85672564432f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.660264 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6baca687-187c-4054-8060-85672564432f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:03 crc kubenswrapper[4808]: I0311 09:45:03.660276 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2bfq\" (UniqueName: \"kubernetes.io/projected/6baca687-187c-4054-8060-85672564432f-kube-api-access-k2bfq\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:04 crc kubenswrapper[4808]: I0311 09:45:04.093932 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" 
event={"ID":"6baca687-187c-4054-8060-85672564432f","Type":"ContainerDied","Data":"fab9417313a076e3a1437a11c1201fc315309cdcabd6d18b465fdc6ca1e54def"} Mar 11 09:45:04 crc kubenswrapper[4808]: I0311 09:45:04.093982 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab9417313a076e3a1437a11c1201fc315309cdcabd6d18b465fdc6ca1e54def" Mar 11 09:45:04 crc kubenswrapper[4808]: I0311 09:45:04.093984 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx" Mar 11 09:45:04 crc kubenswrapper[4808]: I0311 09:45:04.456422 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"] Mar 11 09:45:04 crc kubenswrapper[4808]: I0311 09:45:04.463753 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-2qbqt"] Mar 11 09:45:05 crc kubenswrapper[4808]: I0311 09:45:05.813436 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9" path="/var/lib/kubelet/pods/6e57a2f8-6fa7-4b10-96f9-cd2a678da7e9/volumes" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.595602 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:45:52 crc kubenswrapper[4808]: E0311 09:45:52.596713 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6baca687-187c-4054-8060-85672564432f" containerName="collect-profiles" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.596737 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6baca687-187c-4054-8060-85672564432f" containerName="collect-profiles" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.596975 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6baca687-187c-4054-8060-85672564432f" containerName="collect-profiles" 
Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.598594 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.611710 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.636285 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6zl\" (UniqueName: \"kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.636374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.636410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.737383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6zl\" (UniqueName: \"kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " 
pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.737447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.737495 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.738085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.738108 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.767465 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6zl\" (UniqueName: \"kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl\") pod \"community-operators-t65c7\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " 
pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:52 crc kubenswrapper[4808]: I0311 09:45:52.928541 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:45:53 crc kubenswrapper[4808]: I0311 09:45:53.469782 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:45:53 crc kubenswrapper[4808]: I0311 09:45:53.491185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerStarted","Data":"ab11cbd8b0a6705e8dc90037a81254185daeee5958af5c441a2ad0aad358c395"} Mar 11 09:45:54 crc kubenswrapper[4808]: I0311 09:45:54.498723 4808 generic.go:334] "Generic (PLEG): container finished" podID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerID="1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b" exitCode=0 Mar 11 09:45:54 crc kubenswrapper[4808]: I0311 09:45:54.498960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerDied","Data":"1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b"} Mar 11 09:45:54 crc kubenswrapper[4808]: I0311 09:45:54.500456 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:45:55 crc kubenswrapper[4808]: I0311 09:45:55.233719 4808 scope.go:117] "RemoveContainer" containerID="b83ba2ddcca5dbf52b6c44e7be61b1acd93f0e028dff7f413c33253fd21fa247" Mar 11 09:45:55 crc kubenswrapper[4808]: I0311 09:45:55.506264 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerStarted","Data":"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d"} Mar 11 
09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.381709 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.383142 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.413386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqql\" (UniqueName: \"kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.413763 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.413823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.415068 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.513822 4808 generic.go:334] "Generic (PLEG): container finished" podID="7031fdce-b9db-4971-b2d6-7b5749c91421" 
containerID="8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d" exitCode=0 Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.513897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerDied","Data":"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d"} Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.514885 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdqql\" (UniqueName: \"kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.514975 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.515076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.515546 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 
crc kubenswrapper[4808]: I0311 09:45:56.515557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.543873 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdqql\" (UniqueName: \"kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql\") pod \"redhat-marketplace-rjrx2\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.718181 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.985291 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.988662 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:56 crc kubenswrapper[4808]: I0311 09:45:56.994828 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.023374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.023435 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9z8\" (UniqueName: \"kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.023476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.124932 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.125063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.125115 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9z8\" (UniqueName: \"kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.125867 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.125895 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.146482 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9z8\" (UniqueName: \"kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8\") pod \"redhat-operators-rspmc\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.170286 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:45:57 crc 
kubenswrapper[4808]: W0311 09:45:57.178601 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc907152_bc5f_4c0f_85b4_cde69e7d591e.slice/crio-4510233be1482070d6ad0d062f9cc515e93578739816100453de1368b6f8703a WatchSource:0}: Error finding container 4510233be1482070d6ad0d062f9cc515e93578739816100453de1368b6f8703a: Status 404 returned error can't find the container with id 4510233be1482070d6ad0d062f9cc515e93578739816100453de1368b6f8703a Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.317690 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.522758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerStarted","Data":"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610"} Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.525401 4808 generic.go:334] "Generic (PLEG): container finished" podID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerID="b835e70cd5cebc0aee7da53347d027ae063c63c04dc1e5da085bae95e0900cd2" exitCode=0 Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.525436 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerDied","Data":"b835e70cd5cebc0aee7da53347d027ae063c63c04dc1e5da085bae95e0900cd2"} Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.525457 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerStarted","Data":"4510233be1482070d6ad0d062f9cc515e93578739816100453de1368b6f8703a"} Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.546864 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t65c7" podStartSLOduration=3.146604313 podStartE2EDuration="5.546840824s" podCreationTimestamp="2026-03-11 09:45:52 +0000 UTC" firstStartedPulling="2026-03-11 09:45:54.500235209 +0000 UTC m=+4005.453558529" lastFinishedPulling="2026-03-11 09:45:56.90047172 +0000 UTC m=+4007.853795040" observedRunningTime="2026-03-11 09:45:57.541135052 +0000 UTC m=+4008.494458392" watchObservedRunningTime="2026-03-11 09:45:57.546840824 +0000 UTC m=+4008.500164154" Mar 11 09:45:57 crc kubenswrapper[4808]: I0311 09:45:57.754047 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:45:57 crc kubenswrapper[4808]: W0311 09:45:57.756638 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda48f113c_85b9_4909_ac64_55224003d121.slice/crio-1487cdba4106268cf42d6d54af8050b8ed0e922bdc68fe6e8d0a76973a41a93e WatchSource:0}: Error finding container 1487cdba4106268cf42d6d54af8050b8ed0e922bdc68fe6e8d0a76973a41a93e: Status 404 returned error can't find the container with id 1487cdba4106268cf42d6d54af8050b8ed0e922bdc68fe6e8d0a76973a41a93e Mar 11 09:45:58 crc kubenswrapper[4808]: I0311 09:45:58.532637 4808 generic.go:334] "Generic (PLEG): container finished" podID="a48f113c-85b9-4909-ac64-55224003d121" containerID="324deb0cc84d7f3a570b4a02303f160c9543785daede164f4009c2fc3060bd5f" exitCode=0 Mar 11 09:45:58 crc kubenswrapper[4808]: I0311 09:45:58.533920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerDied","Data":"324deb0cc84d7f3a570b4a02303f160c9543785daede164f4009c2fc3060bd5f"} Mar 11 09:45:58 crc kubenswrapper[4808]: I0311 09:45:58.533942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerStarted","Data":"1487cdba4106268cf42d6d54af8050b8ed0e922bdc68fe6e8d0a76973a41a93e"} Mar 11 09:45:59 crc kubenswrapper[4808]: I0311 09:45:59.544996 4808 generic.go:334] "Generic (PLEG): container finished" podID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerID="4dcbff4da65dddb950ae47f18d7601edb98120d316136210f933d2b492fb741c" exitCode=0 Mar 11 09:45:59 crc kubenswrapper[4808]: I0311 09:45:59.545049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerDied","Data":"4dcbff4da65dddb950ae47f18d7601edb98120d316136210f933d2b492fb741c"} Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.142942 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553706-pbvtg"] Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.144160 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.150947 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.151475 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.151475 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.154303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-pbvtg"] Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.170167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srktt\" (UniqueName: \"kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt\") pod \"auto-csr-approver-29553706-pbvtg\" (UID: \"572296bd-3c04-4290-997d-79f418d0d605\") " pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.271376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srktt\" (UniqueName: \"kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt\") pod \"auto-csr-approver-29553706-pbvtg\" (UID: \"572296bd-3c04-4290-997d-79f418d0d605\") " pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.290316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srktt\" (UniqueName: \"kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt\") pod \"auto-csr-approver-29553706-pbvtg\" (UID: \"572296bd-3c04-4290-997d-79f418d0d605\") " 
pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.459849 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.584520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerStarted","Data":"debc44d6375afd1bed332fe84b60fe2905a7b3746fcb29b6adb56589ba4390d1"} Mar 11 09:46:00 crc kubenswrapper[4808]: I0311 09:46:00.896203 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-pbvtg"] Mar 11 09:46:00 crc kubenswrapper[4808]: W0311 09:46:00.955186 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod572296bd_3c04_4290_997d_79f418d0d605.slice/crio-9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306 WatchSource:0}: Error finding container 9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306: Status 404 returned error can't find the container with id 9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306 Mar 11 09:46:01 crc kubenswrapper[4808]: I0311 09:46:01.595120 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerStarted","Data":"29bdc6d91768e4fbd7a557fa294ab249f57c67bc44c91329c3fb7c1b6e67923c"} Mar 11 09:46:01 crc kubenswrapper[4808]: I0311 09:46:01.596767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" event={"ID":"572296bd-3c04-4290-997d-79f418d0d605","Type":"ContainerStarted","Data":"9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306"} Mar 11 09:46:01 crc kubenswrapper[4808]: I0311 09:46:01.598728 
4808 generic.go:334] "Generic (PLEG): container finished" podID="a48f113c-85b9-4909-ac64-55224003d121" containerID="debc44d6375afd1bed332fe84b60fe2905a7b3746fcb29b6adb56589ba4390d1" exitCode=0 Mar 11 09:46:01 crc kubenswrapper[4808]: I0311 09:46:01.598753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerDied","Data":"debc44d6375afd1bed332fe84b60fe2905a7b3746fcb29b6adb56589ba4390d1"} Mar 11 09:46:01 crc kubenswrapper[4808]: I0311 09:46:01.616894 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjrx2" podStartSLOduration=2.328891501 podStartE2EDuration="5.616876437s" podCreationTimestamp="2026-03-11 09:45:56 +0000 UTC" firstStartedPulling="2026-03-11 09:45:57.526665962 +0000 UTC m=+4008.479989282" lastFinishedPulling="2026-03-11 09:46:00.814650868 +0000 UTC m=+4011.767974218" observedRunningTime="2026-03-11 09:46:01.614783187 +0000 UTC m=+4012.568106527" watchObservedRunningTime="2026-03-11 09:46:01.616876437 +0000 UTC m=+4012.570199767" Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.608120 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerStarted","Data":"b22b04bae697ce6c34897baa50194566ec5a71c860e74835f79cbdf4d0a1f67d"} Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.609965 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" event={"ID":"572296bd-3c04-4290-997d-79f418d0d605","Type":"ContainerStarted","Data":"28e24b171e07049f3b03c40ba5fce8620c3e9f525d915257ec6e10de2b8c69f2"} Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.630793 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rspmc" 
podStartSLOduration=3.023338679 podStartE2EDuration="6.630774522s" podCreationTimestamp="2026-03-11 09:45:56 +0000 UTC" firstStartedPulling="2026-03-11 09:45:58.53963568 +0000 UTC m=+4009.492959000" lastFinishedPulling="2026-03-11 09:46:02.147071523 +0000 UTC m=+4013.100394843" observedRunningTime="2026-03-11 09:46:02.62613842 +0000 UTC m=+4013.579461740" watchObservedRunningTime="2026-03-11 09:46:02.630774522 +0000 UTC m=+4013.584097842" Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.641645 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" podStartSLOduration=1.6823789649999998 podStartE2EDuration="2.641611449s" podCreationTimestamp="2026-03-11 09:46:00 +0000 UTC" firstStartedPulling="2026-03-11 09:46:00.957263926 +0000 UTC m=+4011.910587246" lastFinishedPulling="2026-03-11 09:46:01.9164964 +0000 UTC m=+4012.869819730" observedRunningTime="2026-03-11 09:46:02.641314061 +0000 UTC m=+4013.594637381" watchObservedRunningTime="2026-03-11 09:46:02.641611449 +0000 UTC m=+4013.594934769" Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.929672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.930396 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:02 crc kubenswrapper[4808]: I0311 09:46:02.970883 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:03 crc kubenswrapper[4808]: I0311 09:46:03.618189 4808 generic.go:334] "Generic (PLEG): container finished" podID="572296bd-3c04-4290-997d-79f418d0d605" containerID="28e24b171e07049f3b03c40ba5fce8620c3e9f525d915257ec6e10de2b8c69f2" exitCode=0 Mar 11 09:46:03 crc kubenswrapper[4808]: I0311 09:46:03.618254 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" event={"ID":"572296bd-3c04-4290-997d-79f418d0d605","Type":"ContainerDied","Data":"28e24b171e07049f3b03c40ba5fce8620c3e9f525d915257ec6e10de2b8c69f2"} Mar 11 09:46:03 crc kubenswrapper[4808]: I0311 09:46:03.664812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:04.999732 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.036345 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srktt\" (UniqueName: \"kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt\") pod \"572296bd-3c04-4290-997d-79f418d0d605\" (UID: \"572296bd-3c04-4290-997d-79f418d0d605\") " Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.042751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt" (OuterVolumeSpecName: "kube-api-access-srktt") pod "572296bd-3c04-4290-997d-79f418d0d605" (UID: "572296bd-3c04-4290-997d-79f418d0d605"). InnerVolumeSpecName "kube-api-access-srktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.138325 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srktt\" (UniqueName: \"kubernetes.io/projected/572296bd-3c04-4290-997d-79f418d0d605-kube-api-access-srktt\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.634074 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.638477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-pbvtg" event={"ID":"572296bd-3c04-4290-997d-79f418d0d605","Type":"ContainerDied","Data":"9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306"} Mar 11 09:46:05 crc kubenswrapper[4808]: I0311 09:46:05.638520 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c8591eaace9704ebdd7d0164c1ad287a8afc643ed368dc0879a36931c955306" Mar 11 09:46:06 crc kubenswrapper[4808]: I0311 09:46:06.079546 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-9rdwx"] Mar 11 09:46:06 crc kubenswrapper[4808]: I0311 09:46:06.087090 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-9rdwx"] Mar 11 09:46:06 crc kubenswrapper[4808]: I0311 09:46:06.719813 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:06 crc kubenswrapper[4808]: I0311 09:46:06.720456 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:06 crc kubenswrapper[4808]: I0311 09:46:06.789901 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.318831 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.318887 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.707200 4808 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.800211 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2d2ddf-c0da-4693-9701-dff7ede958e0" path="/var/lib/kubelet/pods/fc2d2ddf-c0da-4693-9701-dff7ede958e0/volumes" Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.980933 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:46:07 crc kubenswrapper[4808]: I0311 09:46:07.981979 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t65c7" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="registry-server" containerID="cri-o://3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610" gracePeriod=2 Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.370981 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rspmc" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="registry-server" probeResult="failure" output=< Mar 11 09:46:08 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 09:46:08 crc kubenswrapper[4808]: > Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.453786 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.589189 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities\") pod \"7031fdce-b9db-4971-b2d6-7b5749c91421\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.589329 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content\") pod \"7031fdce-b9db-4971-b2d6-7b5749c91421\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.589374 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6zl\" (UniqueName: \"kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl\") pod \"7031fdce-b9db-4971-b2d6-7b5749c91421\" (UID: \"7031fdce-b9db-4971-b2d6-7b5749c91421\") " Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.590142 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities" (OuterVolumeSpecName: "utilities") pod "7031fdce-b9db-4971-b2d6-7b5749c91421" (UID: "7031fdce-b9db-4971-b2d6-7b5749c91421"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.594920 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl" (OuterVolumeSpecName: "kube-api-access-fx6zl") pod "7031fdce-b9db-4971-b2d6-7b5749c91421" (UID: "7031fdce-b9db-4971-b2d6-7b5749c91421"). InnerVolumeSpecName "kube-api-access-fx6zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.638489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7031fdce-b9db-4971-b2d6-7b5749c91421" (UID: "7031fdce-b9db-4971-b2d6-7b5749c91421"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.661013 4808 generic.go:334] "Generic (PLEG): container finished" podID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerID="3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610" exitCode=0 Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.661088 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t65c7" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.661120 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerDied","Data":"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610"} Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.661183 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t65c7" event={"ID":"7031fdce-b9db-4971-b2d6-7b5749c91421","Type":"ContainerDied","Data":"ab11cbd8b0a6705e8dc90037a81254185daeee5958af5c441a2ad0aad358c395"} Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.661255 4808 scope.go:117] "RemoveContainer" containerID="3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.690497 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.690534 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7031fdce-b9db-4971-b2d6-7b5749c91421-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.690544 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6zl\" (UniqueName: \"kubernetes.io/projected/7031fdce-b9db-4971-b2d6-7b5749c91421-kube-api-access-fx6zl\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.692407 4808 scope.go:117] "RemoveContainer" containerID="8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.708826 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.716665 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t65c7"] Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.728652 4808 scope.go:117] "RemoveContainer" containerID="1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.752948 4808 scope.go:117] "RemoveContainer" containerID="3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610" Mar 11 09:46:08 crc kubenswrapper[4808]: E0311 09:46:08.753574 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610\": container with ID starting with 3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610 not found: ID does not exist" 
containerID="3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.753620 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610"} err="failed to get container status \"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610\": rpc error: code = NotFound desc = could not find container \"3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610\": container with ID starting with 3931b66826b0ee1dd0c3ce0aa062f56eb3714a6b07db46da61daeb07f33ba610 not found: ID does not exist" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.753649 4808 scope.go:117] "RemoveContainer" containerID="8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d" Mar 11 09:46:08 crc kubenswrapper[4808]: E0311 09:46:08.753929 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d\": container with ID starting with 8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d not found: ID does not exist" containerID="8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.753948 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d"} err="failed to get container status \"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d\": rpc error: code = NotFound desc = could not find container \"8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d\": container with ID starting with 8308a86883f42f0de442718eb8381137eafa248cd0f3b03e1aa9da6dc5544e2d not found: ID does not exist" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.753960 4808 scope.go:117] 
"RemoveContainer" containerID="1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b" Mar 11 09:46:08 crc kubenswrapper[4808]: E0311 09:46:08.754294 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b\": container with ID starting with 1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b not found: ID does not exist" containerID="1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.754315 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b"} err="failed to get container status \"1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b\": rpc error: code = NotFound desc = could not find container \"1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b\": container with ID starting with 1859d490b1e260517a71a754cd3c2b9bf331aa91b36447f71ae4846b6a75ba1b not found: ID does not exist" Mar 11 09:46:08 crc kubenswrapper[4808]: I0311 09:46:08.782468 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:46:09 crc kubenswrapper[4808]: I0311 09:46:09.804506 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" path="/var/lib/kubelet/pods/7031fdce-b9db-4971-b2d6-7b5749c91421/volumes" Mar 11 09:46:10 crc kubenswrapper[4808]: I0311 09:46:10.681933 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjrx2" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="registry-server" containerID="cri-o://29bdc6d91768e4fbd7a557fa294ab249f57c67bc44c91329c3fb7c1b6e67923c" gracePeriod=2 Mar 11 09:46:11 crc kubenswrapper[4808]: I0311 
09:46:11.692062 4808 generic.go:334] "Generic (PLEG): container finished" podID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerID="29bdc6d91768e4fbd7a557fa294ab249f57c67bc44c91329c3fb7c1b6e67923c" exitCode=0 Mar 11 09:46:11 crc kubenswrapper[4808]: I0311 09:46:11.693263 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerDied","Data":"29bdc6d91768e4fbd7a557fa294ab249f57c67bc44c91329c3fb7c1b6e67923c"} Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.317830 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.343063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content\") pod \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.343133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities\") pod \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.343164 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdqql\" (UniqueName: \"kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql\") pod \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\" (UID: \"cc907152-bc5f-4c0f-85b4-cde69e7d591e\") " Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.344664 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities" (OuterVolumeSpecName: "utilities") pod "cc907152-bc5f-4c0f-85b4-cde69e7d591e" (UID: "cc907152-bc5f-4c0f-85b4-cde69e7d591e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.358554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql" (OuterVolumeSpecName: "kube-api-access-pdqql") pod "cc907152-bc5f-4c0f-85b4-cde69e7d591e" (UID: "cc907152-bc5f-4c0f-85b4-cde69e7d591e"). InnerVolumeSpecName "kube-api-access-pdqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.367370 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc907152-bc5f-4c0f-85b4-cde69e7d591e" (UID: "cc907152-bc5f-4c0f-85b4-cde69e7d591e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.444986 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.445035 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc907152-bc5f-4c0f-85b4-cde69e7d591e-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.445052 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdqql\" (UniqueName: \"kubernetes.io/projected/cc907152-bc5f-4c0f-85b4-cde69e7d591e-kube-api-access-pdqql\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.703137 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjrx2" event={"ID":"cc907152-bc5f-4c0f-85b4-cde69e7d591e","Type":"ContainerDied","Data":"4510233be1482070d6ad0d062f9cc515e93578739816100453de1368b6f8703a"} Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.703220 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjrx2" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.703231 4808 scope.go:117] "RemoveContainer" containerID="29bdc6d91768e4fbd7a557fa294ab249f57c67bc44c91329c3fb7c1b6e67923c" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.725077 4808 scope.go:117] "RemoveContainer" containerID="4dcbff4da65dddb950ae47f18d7601edb98120d316136210f933d2b492fb741c" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.742025 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.744757 4808 scope.go:117] "RemoveContainer" containerID="b835e70cd5cebc0aee7da53347d027ae063c63c04dc1e5da085bae95e0900cd2" Mar 11 09:46:12 crc kubenswrapper[4808]: I0311 09:46:12.751458 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjrx2"] Mar 11 09:46:13 crc kubenswrapper[4808]: I0311 09:46:13.804180 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" path="/var/lib/kubelet/pods/cc907152-bc5f-4c0f-85b4-cde69e7d591e/volumes" Mar 11 09:46:17 crc kubenswrapper[4808]: I0311 09:46:17.359624 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:17 crc kubenswrapper[4808]: I0311 09:46:17.405457 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.382824 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.383513 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rspmc" podUID="a48f113c-85b9-4909-ac64-55224003d121" 
containerName="registry-server" containerID="cri-o://b22b04bae697ce6c34897baa50194566ec5a71c860e74835f79cbdf4d0a1f67d" gracePeriod=2 Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.781280 4808 generic.go:334] "Generic (PLEG): container finished" podID="a48f113c-85b9-4909-ac64-55224003d121" containerID="b22b04bae697ce6c34897baa50194566ec5a71c860e74835f79cbdf4d0a1f67d" exitCode=0 Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.781397 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerDied","Data":"b22b04bae697ce6c34897baa50194566ec5a71c860e74835f79cbdf4d0a1f67d"} Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.872214 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.893915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content\") pod \"a48f113c-85b9-4909-ac64-55224003d121\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.893998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9z8\" (UniqueName: \"kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8\") pod \"a48f113c-85b9-4909-ac64-55224003d121\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.894047 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities\") pod \"a48f113c-85b9-4909-ac64-55224003d121\" (UID: \"a48f113c-85b9-4909-ac64-55224003d121\") " Mar 11 09:46:21 crc 
kubenswrapper[4808]: I0311 09:46:21.895534 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities" (OuterVolumeSpecName: "utilities") pod "a48f113c-85b9-4909-ac64-55224003d121" (UID: "a48f113c-85b9-4909-ac64-55224003d121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.900337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8" (OuterVolumeSpecName: "kube-api-access-zj9z8") pod "a48f113c-85b9-4909-ac64-55224003d121" (UID: "a48f113c-85b9-4909-ac64-55224003d121"). InnerVolumeSpecName "kube-api-access-zj9z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.996851 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:21 crc kubenswrapper[4808]: I0311 09:46:21.996903 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9z8\" (UniqueName: \"kubernetes.io/projected/a48f113c-85b9-4909-ac64-55224003d121-kube-api-access-zj9z8\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.028636 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a48f113c-85b9-4909-ac64-55224003d121" (UID: "a48f113c-85b9-4909-ac64-55224003d121"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.097894 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48f113c-85b9-4909-ac64-55224003d121-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.790307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rspmc" event={"ID":"a48f113c-85b9-4909-ac64-55224003d121","Type":"ContainerDied","Data":"1487cdba4106268cf42d6d54af8050b8ed0e922bdc68fe6e8d0a76973a41a93e"} Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.790624 4808 scope.go:117] "RemoveContainer" containerID="b22b04bae697ce6c34897baa50194566ec5a71c860e74835f79cbdf4d0a1f67d" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.790329 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rspmc" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.813511 4808 scope.go:117] "RemoveContainer" containerID="debc44d6375afd1bed332fe84b60fe2905a7b3746fcb29b6adb56589ba4390d1" Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.839940 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.845059 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rspmc"] Mar 11 09:46:22 crc kubenswrapper[4808]: I0311 09:46:22.862548 4808 scope.go:117] "RemoveContainer" containerID="324deb0cc84d7f3a570b4a02303f160c9543785daede164f4009c2fc3060bd5f" Mar 11 09:46:23 crc kubenswrapper[4808]: I0311 09:46:23.804765 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48f113c-85b9-4909-ac64-55224003d121" path="/var/lib/kubelet/pods/a48f113c-85b9-4909-ac64-55224003d121/volumes" Mar 11 09:46:55 crc 
kubenswrapper[4808]: I0311 09:46:55.286011 4808 scope.go:117] "RemoveContainer" containerID="85321075453d34457b6d479d6eb8c04cda02d25f3489b5400f1f37ef636c646f" Mar 11 09:47:16 crc kubenswrapper[4808]: I0311 09:47:16.027503 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:47:16 crc kubenswrapper[4808]: I0311 09:47:16.028261 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:47:46 crc kubenswrapper[4808]: I0311 09:47:46.028180 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:47:46 crc kubenswrapper[4808]: I0311 09:47:46.029429 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.151861 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553708-lgdxt"] Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153003 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" 
containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153025 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153044 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153056 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153080 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153092 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153109 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153123 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153146 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153160 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153177 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" 
containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153189 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153202 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572296bd-3c04-4290-997d-79f418d0d605" containerName="oc" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153214 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="572296bd-3c04-4290-997d-79f418d0d605" containerName="oc" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153238 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153249 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153270 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153282 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: E0311 09:48:00.153297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153309 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153612 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48f113c-85b9-4909-ac64-55224003d121" containerName="registry-server" 
Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153634 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7031fdce-b9db-4971-b2d6-7b5749c91421" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153652 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="572296bd-3c04-4290-997d-79f418d0d605" containerName="oc" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.153679 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc907152-bc5f-4c0f-85b4-cde69e7d591e" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.154427 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.159572 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.159743 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.165885 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.176767 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-lgdxt"] Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.309208 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8b4p\" (UniqueName: \"kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p\") pod \"auto-csr-approver-29553708-lgdxt\" (UID: \"8571e6f1-1f7e-4c10-832b-168dbdfe95ad\") " pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.411383 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8b4p\" (UniqueName: \"kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p\") pod \"auto-csr-approver-29553708-lgdxt\" (UID: \"8571e6f1-1f7e-4c10-832b-168dbdfe95ad\") " pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.444680 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8b4p\" (UniqueName: \"kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p\") pod \"auto-csr-approver-29553708-lgdxt\" (UID: \"8571e6f1-1f7e-4c10-832b-168dbdfe95ad\") " pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.519471 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:00 crc kubenswrapper[4808]: I0311 09:48:00.963081 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-lgdxt"] Mar 11 09:48:01 crc kubenswrapper[4808]: I0311 09:48:01.667105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" event={"ID":"8571e6f1-1f7e-4c10-832b-168dbdfe95ad","Type":"ContainerStarted","Data":"ccdf23b8282254d5e0545e30d469a85f05f602368fcbc0ebc313ca3d6b0bf2c8"} Mar 11 09:48:02 crc kubenswrapper[4808]: I0311 09:48:02.677568 4808 generic.go:334] "Generic (PLEG): container finished" podID="8571e6f1-1f7e-4c10-832b-168dbdfe95ad" containerID="90bb9fd32bf81932a5d11a79eb1a96692c6ec92e940f35f9f806a62b6d59a7a9" exitCode=0 Mar 11 09:48:02 crc kubenswrapper[4808]: I0311 09:48:02.677694 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" 
event={"ID":"8571e6f1-1f7e-4c10-832b-168dbdfe95ad","Type":"ContainerDied","Data":"90bb9fd32bf81932a5d11a79eb1a96692c6ec92e940f35f9f806a62b6d59a7a9"} Mar 11 09:48:03 crc kubenswrapper[4808]: I0311 09:48:03.960424 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.082735 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8b4p\" (UniqueName: \"kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p\") pod \"8571e6f1-1f7e-4c10-832b-168dbdfe95ad\" (UID: \"8571e6f1-1f7e-4c10-832b-168dbdfe95ad\") " Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.088607 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p" (OuterVolumeSpecName: "kube-api-access-p8b4p") pod "8571e6f1-1f7e-4c10-832b-168dbdfe95ad" (UID: "8571e6f1-1f7e-4c10-832b-168dbdfe95ad"). InnerVolumeSpecName "kube-api-access-p8b4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.184895 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8b4p\" (UniqueName: \"kubernetes.io/projected/8571e6f1-1f7e-4c10-832b-168dbdfe95ad-kube-api-access-p8b4p\") on node \"crc\" DevicePath \"\"" Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.697337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" event={"ID":"8571e6f1-1f7e-4c10-832b-168dbdfe95ad","Type":"ContainerDied","Data":"ccdf23b8282254d5e0545e30d469a85f05f602368fcbc0ebc313ca3d6b0bf2c8"} Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.697647 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccdf23b8282254d5e0545e30d469a85f05f602368fcbc0ebc313ca3d6b0bf2c8" Mar 11 09:48:04 crc kubenswrapper[4808]: I0311 09:48:04.697414 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-lgdxt" Mar 11 09:48:05 crc kubenswrapper[4808]: I0311 09:48:05.044953 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-9465s"] Mar 11 09:48:05 crc kubenswrapper[4808]: I0311 09:48:05.051101 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-9465s"] Mar 11 09:48:05 crc kubenswrapper[4808]: I0311 09:48:05.804462 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea1bd3a-7355-4bcb-b908-2ab2b9e970a6" path="/var/lib/kubelet/pods/dea1bd3a-7355-4bcb-b908-2ab2b9e970a6/volumes" Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.027283 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.027907 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.027985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.029084 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.029161 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35" gracePeriod=600 Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.812531 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35" exitCode=0 Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.812598 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35"} Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.813242 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff"} Mar 11 09:48:16 crc kubenswrapper[4808]: I0311 09:48:16.813270 4808 scope.go:117] "RemoveContainer" containerID="327e481c771e263c438770d9467a9e777f57eb934e6c4efbb697dbdded9363eb" Mar 11 09:48:55 crc kubenswrapper[4808]: I0311 09:48:55.387969 4808 scope.go:117] "RemoveContainer" containerID="631edf2edd83d1863a615f373b866598a0970c3f9954b64cd536c31993076b8c" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.168762 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553710-8kbd6"] Mar 11 09:50:00 crc kubenswrapper[4808]: E0311 09:50:00.171007 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8571e6f1-1f7e-4c10-832b-168dbdfe95ad" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.171030 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8571e6f1-1f7e-4c10-832b-168dbdfe95ad" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.171217 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8571e6f1-1f7e-4c10-832b-168dbdfe95ad" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.171760 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-8kbd6"] Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.171876 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.197578 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.197657 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.197744 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.298176 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d52d\" (UniqueName: \"kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d\") pod \"auto-csr-approver-29553710-8kbd6\" (UID: \"c89e377b-73f4-4e03-8d45-b56478ac9d0a\") " pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.399263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d52d\" (UniqueName: \"kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d\") pod \"auto-csr-approver-29553710-8kbd6\" (UID: \"c89e377b-73f4-4e03-8d45-b56478ac9d0a\") " pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.422251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d52d\" (UniqueName: \"kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d\") pod \"auto-csr-approver-29553710-8kbd6\" (UID: \"c89e377b-73f4-4e03-8d45-b56478ac9d0a\") " pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.513745 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:00 crc kubenswrapper[4808]: I0311 09:50:00.962213 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-8kbd6"] Mar 11 09:50:01 crc kubenswrapper[4808]: I0311 09:50:01.707757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" event={"ID":"c89e377b-73f4-4e03-8d45-b56478ac9d0a","Type":"ContainerStarted","Data":"007c0e36deb2ecba63e6bf898fdbe52c8dc269772aab9ce99c71c04f6d8e50a1"} Mar 11 09:50:02 crc kubenswrapper[4808]: I0311 09:50:02.717153 4808 generic.go:334] "Generic (PLEG): container finished" podID="c89e377b-73f4-4e03-8d45-b56478ac9d0a" containerID="2cf2592e5817b1526fe38371eb10716efac3d6c42811c750314883ffd5699e04" exitCode=0 Mar 11 09:50:02 crc kubenswrapper[4808]: I0311 09:50:02.717384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" event={"ID":"c89e377b-73f4-4e03-8d45-b56478ac9d0a","Type":"ContainerDied","Data":"2cf2592e5817b1526fe38371eb10716efac3d6c42811c750314883ffd5699e04"} Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.002187 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.158838 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d52d\" (UniqueName: \"kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d\") pod \"c89e377b-73f4-4e03-8d45-b56478ac9d0a\" (UID: \"c89e377b-73f4-4e03-8d45-b56478ac9d0a\") " Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.163966 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d" (OuterVolumeSpecName: "kube-api-access-7d52d") pod "c89e377b-73f4-4e03-8d45-b56478ac9d0a" (UID: "c89e377b-73f4-4e03-8d45-b56478ac9d0a"). InnerVolumeSpecName "kube-api-access-7d52d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.261194 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d52d\" (UniqueName: \"kubernetes.io/projected/c89e377b-73f4-4e03-8d45-b56478ac9d0a-kube-api-access-7d52d\") on node \"crc\" DevicePath \"\"" Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.734635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" event={"ID":"c89e377b-73f4-4e03-8d45-b56478ac9d0a","Type":"ContainerDied","Data":"007c0e36deb2ecba63e6bf898fdbe52c8dc269772aab9ce99c71c04f6d8e50a1"} Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.734907 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007c0e36deb2ecba63e6bf898fdbe52c8dc269772aab9ce99c71c04f6d8e50a1" Mar 11 09:50:04 crc kubenswrapper[4808]: I0311 09:50:04.734691 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-8kbd6" Mar 11 09:50:05 crc kubenswrapper[4808]: I0311 09:50:05.076195 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-d5nxf"] Mar 11 09:50:05 crc kubenswrapper[4808]: I0311 09:50:05.081392 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-d5nxf"] Mar 11 09:50:05 crc kubenswrapper[4808]: I0311 09:50:05.797865 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc64431-a07e-43e9-8350-988a62854b7c" path="/var/lib/kubelet/pods/afc64431-a07e-43e9-8350-988a62854b7c/volumes" Mar 11 09:50:16 crc kubenswrapper[4808]: I0311 09:50:16.027567 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:50:16 crc kubenswrapper[4808]: I0311 09:50:16.028104 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:50:46 crc kubenswrapper[4808]: I0311 09:50:46.027832 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:50:46 crc kubenswrapper[4808]: I0311 09:50:46.028597 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:50:55 crc kubenswrapper[4808]: I0311 09:50:55.482659 4808 scope.go:117] "RemoveContainer" containerID="90b06ea4f456f3253064833db25778aff5d8bbf457c48c94cb64201e135a6c34" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.028130 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.028866 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.028948 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.030118 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.030432 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" gracePeriod=600 Mar 11 09:51:16 crc kubenswrapper[4808]: E0311 09:51:16.163957 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.275576 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" exitCode=0 Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.275589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff"} Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.275698 4808 scope.go:117] "RemoveContainer" containerID="e6211a68183758376bcd3babdb8bbf9648fbb8e99444df32e4a7da0910992c35" Mar 11 09:51:16 crc kubenswrapper[4808]: I0311 09:51:16.276884 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:51:16 crc kubenswrapper[4808]: E0311 09:51:16.277598 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:51:28 crc kubenswrapper[4808]: I0311 09:51:28.789906 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:51:28 crc kubenswrapper[4808]: E0311 09:51:28.792816 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:51:43 crc kubenswrapper[4808]: I0311 09:51:43.791511 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:51:43 crc kubenswrapper[4808]: E0311 09:51:43.792734 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:51:54 crc kubenswrapper[4808]: I0311 09:51:54.789947 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:51:54 crc kubenswrapper[4808]: E0311 09:51:54.790956 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.141837 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553712-wtjhc"] Mar 11 09:52:00 crc kubenswrapper[4808]: E0311 09:52:00.142592 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89e377b-73f4-4e03-8d45-b56478ac9d0a" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.142611 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89e377b-73f4-4e03-8d45-b56478ac9d0a" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.142800 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89e377b-73f4-4e03-8d45-b56478ac9d0a" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.143342 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.145545 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.145719 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.146987 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.155335 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-wtjhc"] Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.261213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99r7x\" (UniqueName: \"kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x\") pod \"auto-csr-approver-29553712-wtjhc\" (UID: \"e557c917-d211-48a5-8e7d-e854cf2c8fb9\") " pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.363238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99r7x\" (UniqueName: \"kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x\") pod \"auto-csr-approver-29553712-wtjhc\" (UID: \"e557c917-d211-48a5-8e7d-e854cf2c8fb9\") " pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.395143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99r7x\" (UniqueName: \"kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x\") pod \"auto-csr-approver-29553712-wtjhc\" (UID: \"e557c917-d211-48a5-8e7d-e854cf2c8fb9\") " 
pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.462632 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.874093 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-wtjhc"] Mar 11 09:52:00 crc kubenswrapper[4808]: W0311 09:52:00.886206 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode557c917_d211_48a5_8e7d_e854cf2c8fb9.slice/crio-eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3 WatchSource:0}: Error finding container eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3: Status 404 returned error can't find the container with id eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3 Mar 11 09:52:00 crc kubenswrapper[4808]: I0311 09:52:00.890799 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:52:01 crc kubenswrapper[4808]: I0311 09:52:01.679702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" event={"ID":"e557c917-d211-48a5-8e7d-e854cf2c8fb9","Type":"ContainerStarted","Data":"eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3"} Mar 11 09:52:02 crc kubenswrapper[4808]: I0311 09:52:02.690089 4808 generic.go:334] "Generic (PLEG): container finished" podID="e557c917-d211-48a5-8e7d-e854cf2c8fb9" containerID="8bc2a52eb96f9670c872e3553df816c354c711b0341e7cc1792562c85fd573e2" exitCode=0 Mar 11 09:52:02 crc kubenswrapper[4808]: I0311 09:52:02.690194 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" 
event={"ID":"e557c917-d211-48a5-8e7d-e854cf2c8fb9","Type":"ContainerDied","Data":"8bc2a52eb96f9670c872e3553df816c354c711b0341e7cc1792562c85fd573e2"} Mar 11 09:52:03 crc kubenswrapper[4808]: I0311 09:52:03.985899 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.120470 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99r7x\" (UniqueName: \"kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x\") pod \"e557c917-d211-48a5-8e7d-e854cf2c8fb9\" (UID: \"e557c917-d211-48a5-8e7d-e854cf2c8fb9\") " Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.127531 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x" (OuterVolumeSpecName: "kube-api-access-99r7x") pod "e557c917-d211-48a5-8e7d-e854cf2c8fb9" (UID: "e557c917-d211-48a5-8e7d-e854cf2c8fb9"). InnerVolumeSpecName "kube-api-access-99r7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.221933 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99r7x\" (UniqueName: \"kubernetes.io/projected/e557c917-d211-48a5-8e7d-e854cf2c8fb9-kube-api-access-99r7x\") on node \"crc\" DevicePath \"\"" Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.726535 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" event={"ID":"e557c917-d211-48a5-8e7d-e854cf2c8fb9","Type":"ContainerDied","Data":"eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3"} Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.726591 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-wtjhc" Mar 11 09:52:04 crc kubenswrapper[4808]: I0311 09:52:04.726595 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb58a92525b6fe4a01430bb1bc1209b6c0580960d76268668ded54abfdd82bd3" Mar 11 09:52:05 crc kubenswrapper[4808]: I0311 09:52:05.066383 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-pbvtg"] Mar 11 09:52:05 crc kubenswrapper[4808]: I0311 09:52:05.077227 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-pbvtg"] Mar 11 09:52:05 crc kubenswrapper[4808]: I0311 09:52:05.800799 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572296bd-3c04-4290-997d-79f418d0d605" path="/var/lib/kubelet/pods/572296bd-3c04-4290-997d-79f418d0d605/volumes" Mar 11 09:52:08 crc kubenswrapper[4808]: I0311 09:52:08.789675 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:52:08 crc kubenswrapper[4808]: E0311 09:52:08.790085 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:52:23 crc kubenswrapper[4808]: I0311 09:52:23.790211 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:52:23 crc kubenswrapper[4808]: E0311 09:52:23.791431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:52:34 crc kubenswrapper[4808]: I0311 09:52:34.790088 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:52:34 crc kubenswrapper[4808]: E0311 09:52:34.790903 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:52:49 crc kubenswrapper[4808]: I0311 09:52:49.799061 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:52:49 crc kubenswrapper[4808]: E0311 09:52:49.800321 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:52:55 crc kubenswrapper[4808]: I0311 09:52:55.587090 4808 scope.go:117] "RemoveContainer" containerID="28e24b171e07049f3b03c40ba5fce8620c3e9f525d915257ec6e10de2b8c69f2" Mar 11 09:53:03 crc kubenswrapper[4808]: I0311 09:53:03.790128 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:53:03 crc kubenswrapper[4808]: 
E0311 09:53:03.791049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:53:17 crc kubenswrapper[4808]: I0311 09:53:17.789694 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:53:17 crc kubenswrapper[4808]: E0311 09:53:17.791333 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:53:29 crc kubenswrapper[4808]: I0311 09:53:29.796526 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:53:29 crc kubenswrapper[4808]: E0311 09:53:29.797300 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:53:44 crc kubenswrapper[4808]: I0311 09:53:44.789353 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:53:44 crc 
kubenswrapper[4808]: E0311 09:53:44.790441 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:53:56 crc kubenswrapper[4808]: I0311 09:53:56.790142 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:53:56 crc kubenswrapper[4808]: E0311 09:53:56.790855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.149562 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553714-l8q9s"] Mar 11 09:54:00 crc kubenswrapper[4808]: E0311 09:54:00.151046 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e557c917-d211-48a5-8e7d-e854cf2c8fb9" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.151081 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e557c917-d211-48a5-8e7d-e854cf2c8fb9" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.151522 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e557c917-d211-48a5-8e7d-e854cf2c8fb9" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.152542 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.156631 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.156732 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc84j\" (UniqueName: \"kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j\") pod \"auto-csr-approver-29553714-l8q9s\" (UID: \"f1597b1b-7f3f-4285-a9b4-909a884416e0\") " pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.156993 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.157096 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.165319 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-l8q9s"] Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.258695 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc84j\" (UniqueName: \"kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j\") pod \"auto-csr-approver-29553714-l8q9s\" (UID: \"f1597b1b-7f3f-4285-a9b4-909a884416e0\") " pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.284644 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc84j\" (UniqueName: \"kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j\") pod \"auto-csr-approver-29553714-l8q9s\" (UID: \"f1597b1b-7f3f-4285-a9b4-909a884416e0\") " 
pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.475233 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:00 crc kubenswrapper[4808]: I0311 09:54:00.987747 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-l8q9s"] Mar 11 09:54:01 crc kubenswrapper[4808]: I0311 09:54:01.689036 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" event={"ID":"f1597b1b-7f3f-4285-a9b4-909a884416e0","Type":"ContainerStarted","Data":"dac25e14576af66f7449fc498f1dcd2c640f3696179513ff6c4f6b26b5bceeef"} Mar 11 09:54:02 crc kubenswrapper[4808]: I0311 09:54:02.697569 4808 generic.go:334] "Generic (PLEG): container finished" podID="f1597b1b-7f3f-4285-a9b4-909a884416e0" containerID="670cd217bb0ebacbb7aebfe37a6ba8da7d71a66b2675490b95d3ea421bc30b86" exitCode=0 Mar 11 09:54:02 crc kubenswrapper[4808]: I0311 09:54:02.697773 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" event={"ID":"f1597b1b-7f3f-4285-a9b4-909a884416e0","Type":"ContainerDied","Data":"670cd217bb0ebacbb7aebfe37a6ba8da7d71a66b2675490b95d3ea421bc30b86"} Mar 11 09:54:03 crc kubenswrapper[4808]: I0311 09:54:03.982855 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.113764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc84j\" (UniqueName: \"kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j\") pod \"f1597b1b-7f3f-4285-a9b4-909a884416e0\" (UID: \"f1597b1b-7f3f-4285-a9b4-909a884416e0\") " Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.129794 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j" (OuterVolumeSpecName: "kube-api-access-gc84j") pod "f1597b1b-7f3f-4285-a9b4-909a884416e0" (UID: "f1597b1b-7f3f-4285-a9b4-909a884416e0"). InnerVolumeSpecName "kube-api-access-gc84j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.215088 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc84j\" (UniqueName: \"kubernetes.io/projected/f1597b1b-7f3f-4285-a9b4-909a884416e0-kube-api-access-gc84j\") on node \"crc\" DevicePath \"\"" Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.718529 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" event={"ID":"f1597b1b-7f3f-4285-a9b4-909a884416e0","Type":"ContainerDied","Data":"dac25e14576af66f7449fc498f1dcd2c640f3696179513ff6c4f6b26b5bceeef"} Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.718573 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac25e14576af66f7449fc498f1dcd2c640f3696179513ff6c4f6b26b5bceeef" Mar 11 09:54:04 crc kubenswrapper[4808]: I0311 09:54:04.718597 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-l8q9s" Mar 11 09:54:05 crc kubenswrapper[4808]: I0311 09:54:05.051766 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-lgdxt"] Mar 11 09:54:05 crc kubenswrapper[4808]: I0311 09:54:05.061469 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-lgdxt"] Mar 11 09:54:05 crc kubenswrapper[4808]: I0311 09:54:05.799298 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8571e6f1-1f7e-4c10-832b-168dbdfe95ad" path="/var/lib/kubelet/pods/8571e6f1-1f7e-4c10-832b-168dbdfe95ad/volumes" Mar 11 09:54:07 crc kubenswrapper[4808]: I0311 09:54:07.789670 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:54:07 crc kubenswrapper[4808]: E0311 09:54:07.790272 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:54:19 crc kubenswrapper[4808]: I0311 09:54:19.799682 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:54:19 crc kubenswrapper[4808]: E0311 09:54:19.803047 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:54:34 crc kubenswrapper[4808]: I0311 09:54:34.789274 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:54:34 crc kubenswrapper[4808]: E0311 09:54:34.790236 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:54:49 crc kubenswrapper[4808]: I0311 09:54:49.798665 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:54:49 crc kubenswrapper[4808]: E0311 09:54:49.799437 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:54:55 crc kubenswrapper[4808]: I0311 09:54:55.688102 4808 scope.go:117] "RemoveContainer" containerID="90bb9fd32bf81932a5d11a79eb1a96692c6ec92e940f35f9f806a62b6d59a7a9" Mar 11 09:55:00 crc kubenswrapper[4808]: I0311 09:55:00.789167 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:55:00 crc kubenswrapper[4808]: E0311 09:55:00.789838 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.435434 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:05 crc kubenswrapper[4808]: E0311 09:55:05.436416 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1597b1b-7f3f-4285-a9b4-909a884416e0" containerName="oc" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.436438 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1597b1b-7f3f-4285-a9b4-909a884416e0" containerName="oc" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.436761 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1597b1b-7f3f-4285-a9b4-909a884416e0" containerName="oc" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.441872 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.485002 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.641094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.641177 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfzr\" (UniqueName: \"kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.641202 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.742540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.742597 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8rfzr\" (UniqueName: \"kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.742617 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.743043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.743217 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.762087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfzr\" (UniqueName: \"kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr\") pod \"certified-operators-bjpmk\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:05 crc kubenswrapper[4808]: I0311 09:55:05.785426 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:06 crc kubenswrapper[4808]: I0311 09:55:06.272901 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:07 crc kubenswrapper[4808]: I0311 09:55:07.247731 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerID="b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87" exitCode=0 Mar 11 09:55:07 crc kubenswrapper[4808]: I0311 09:55:07.247771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerDied","Data":"b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87"} Mar 11 09:55:07 crc kubenswrapper[4808]: I0311 09:55:07.247795 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerStarted","Data":"4f7f2452762a3b66978e625c7459516216fce4f2190672b04efce3221b8adf94"} Mar 11 09:55:08 crc kubenswrapper[4808]: I0311 09:55:08.257033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerStarted","Data":"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537"} Mar 11 09:55:09 crc kubenswrapper[4808]: I0311 09:55:09.265198 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerID="f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537" exitCode=0 Mar 11 09:55:09 crc kubenswrapper[4808]: I0311 09:55:09.265354 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" 
event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerDied","Data":"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537"} Mar 11 09:55:11 crc kubenswrapper[4808]: I0311 09:55:11.286087 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerStarted","Data":"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70"} Mar 11 09:55:11 crc kubenswrapper[4808]: I0311 09:55:11.315036 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bjpmk" podStartSLOduration=3.784288875 podStartE2EDuration="6.315006583s" podCreationTimestamp="2026-03-11 09:55:05 +0000 UTC" firstStartedPulling="2026-03-11 09:55:07.24957973 +0000 UTC m=+4558.202903040" lastFinishedPulling="2026-03-11 09:55:09.780297428 +0000 UTC m=+4560.733620748" observedRunningTime="2026-03-11 09:55:11.307904252 +0000 UTC m=+4562.261227572" watchObservedRunningTime="2026-03-11 09:55:11.315006583 +0000 UTC m=+4562.268329943" Mar 11 09:55:11 crc kubenswrapper[4808]: I0311 09:55:11.790698 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:55:11 crc kubenswrapper[4808]: E0311 09:55:11.790918 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:55:15 crc kubenswrapper[4808]: I0311 09:55:15.786286 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:15 crc 
kubenswrapper[4808]: I0311 09:55:15.787337 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:15 crc kubenswrapper[4808]: I0311 09:55:15.826311 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:16 crc kubenswrapper[4808]: I0311 09:55:16.367191 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:16 crc kubenswrapper[4808]: I0311 09:55:16.415344 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.340607 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bjpmk" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="registry-server" containerID="cri-o://2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70" gracePeriod=2 Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.828335 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.865711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities\") pod \"8f48013e-9364-4f4d-8019-af2dc32bce8b\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.865781 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rfzr\" (UniqueName: \"kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr\") pod \"8f48013e-9364-4f4d-8019-af2dc32bce8b\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.865847 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content\") pod \"8f48013e-9364-4f4d-8019-af2dc32bce8b\" (UID: \"8f48013e-9364-4f4d-8019-af2dc32bce8b\") " Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.866747 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities" (OuterVolumeSpecName: "utilities") pod "8f48013e-9364-4f4d-8019-af2dc32bce8b" (UID: "8f48013e-9364-4f4d-8019-af2dc32bce8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.877204 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr" (OuterVolumeSpecName: "kube-api-access-8rfzr") pod "8f48013e-9364-4f4d-8019-af2dc32bce8b" (UID: "8f48013e-9364-4f4d-8019-af2dc32bce8b"). InnerVolumeSpecName "kube-api-access-8rfzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.967880 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:55:18 crc kubenswrapper[4808]: I0311 09:55:18.968308 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rfzr\" (UniqueName: \"kubernetes.io/projected/8f48013e-9364-4f4d-8019-af2dc32bce8b-kube-api-access-8rfzr\") on node \"crc\" DevicePath \"\"" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.292983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f48013e-9364-4f4d-8019-af2dc32bce8b" (UID: "8f48013e-9364-4f4d-8019-af2dc32bce8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.352938 4808 generic.go:334] "Generic (PLEG): container finished" podID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerID="2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70" exitCode=0 Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.353015 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjpmk" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.353015 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerDied","Data":"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70"} Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.353097 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjpmk" event={"ID":"8f48013e-9364-4f4d-8019-af2dc32bce8b","Type":"ContainerDied","Data":"4f7f2452762a3b66978e625c7459516216fce4f2190672b04efce3221b8adf94"} Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.353159 4808 scope.go:117] "RemoveContainer" containerID="2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.374845 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48013e-9364-4f4d-8019-af2dc32bce8b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.402825 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.406874 4808 scope.go:117] "RemoveContainer" containerID="f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.410824 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bjpmk"] Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.434739 4808 scope.go:117] "RemoveContainer" containerID="b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.461495 4808 scope.go:117] "RemoveContainer" 
containerID="2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70" Mar 11 09:55:19 crc kubenswrapper[4808]: E0311 09:55:19.464415 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70\": container with ID starting with 2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70 not found: ID does not exist" containerID="2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.464476 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70"} err="failed to get container status \"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70\": rpc error: code = NotFound desc = could not find container \"2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70\": container with ID starting with 2c248d218da81001ac795adf5c3ecf55385b1a68feccdd26b8efc2a22c402c70 not found: ID does not exist" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.464508 4808 scope.go:117] "RemoveContainer" containerID="f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537" Mar 11 09:55:19 crc kubenswrapper[4808]: E0311 09:55:19.464986 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537\": container with ID starting with f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537 not found: ID does not exist" containerID="f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.465093 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537"} err="failed to get container status \"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537\": rpc error: code = NotFound desc = could not find container \"f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537\": container with ID starting with f0d6ddcdee28191243a1879ccb29aa457d4e64535d26469b86c5579cc2790537 not found: ID does not exist" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.465134 4808 scope.go:117] "RemoveContainer" containerID="b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87" Mar 11 09:55:19 crc kubenswrapper[4808]: E0311 09:55:19.465656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87\": container with ID starting with b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87 not found: ID does not exist" containerID="b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.465699 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87"} err="failed to get container status \"b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87\": rpc error: code = NotFound desc = could not find container \"b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87\": container with ID starting with b3158e13d01b3eb5c8ac5def30f02b4da520495328253a889d02501a86bafe87 not found: ID does not exist" Mar 11 09:55:19 crc kubenswrapper[4808]: I0311 09:55:19.806902 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" path="/var/lib/kubelet/pods/8f48013e-9364-4f4d-8019-af2dc32bce8b/volumes" Mar 11 09:55:22 crc kubenswrapper[4808]: I0311 
09:55:22.789015 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:55:22 crc kubenswrapper[4808]: E0311 09:55:22.789542 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:55:37 crc kubenswrapper[4808]: I0311 09:55:37.790019 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:55:37 crc kubenswrapper[4808]: E0311 09:55:37.790916 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:55:50 crc kubenswrapper[4808]: I0311 09:55:50.789790 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:55:50 crc kubenswrapper[4808]: E0311 09:55:50.790823 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:56:00 crc 
kubenswrapper[4808]: I0311 09:56:00.141476 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tdbvh"] Mar 11 09:56:00 crc kubenswrapper[4808]: E0311 09:56:00.143137 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="extract-content" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.143155 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="extract-content" Mar 11 09:56:00 crc kubenswrapper[4808]: E0311 09:56:00.143176 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.143184 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4808]: E0311 09:56:00.143215 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="extract-utilities" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.143224 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="extract-utilities" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.143412 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f48013e-9364-4f4d-8019-af2dc32bce8b" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.143983 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.149324 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.149479 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.149502 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tdbvh"] Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.150160 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.253824 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjwb\" (UniqueName: \"kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb\") pod \"auto-csr-approver-29553716-tdbvh\" (UID: \"ac4af0f0-76f9-4f25-8981-56ced5cee087\") " pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.356603 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjwb\" (UniqueName: \"kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb\") pod \"auto-csr-approver-29553716-tdbvh\" (UID: \"ac4af0f0-76f9-4f25-8981-56ced5cee087\") " pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.386638 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjwb\" (UniqueName: \"kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb\") pod \"auto-csr-approver-29553716-tdbvh\" (UID: \"ac4af0f0-76f9-4f25-8981-56ced5cee087\") " 
pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.462862 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:00 crc kubenswrapper[4808]: W0311 09:56:00.874711 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac4af0f0_76f9_4f25_8981_56ced5cee087.slice/crio-b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747 WatchSource:0}: Error finding container b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747: Status 404 returned error can't find the container with id b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747 Mar 11 09:56:00 crc kubenswrapper[4808]: I0311 09:56:00.881474 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tdbvh"] Mar 11 09:56:01 crc kubenswrapper[4808]: I0311 09:56:01.719869 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" event={"ID":"ac4af0f0-76f9-4f25-8981-56ced5cee087","Type":"ContainerStarted","Data":"b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747"} Mar 11 09:56:02 crc kubenswrapper[4808]: I0311 09:56:02.731811 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac4af0f0-76f9-4f25-8981-56ced5cee087" containerID="164c244746403c328554c5b61186bf33d0c07d1d3cd9f823f1e3c4e7df5706d8" exitCode=0 Mar 11 09:56:02 crc kubenswrapper[4808]: I0311 09:56:02.731878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" event={"ID":"ac4af0f0-76f9-4f25-8981-56ced5cee087","Type":"ContainerDied","Data":"164c244746403c328554c5b61186bf33d0c07d1d3cd9f823f1e3c4e7df5706d8"} Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.070549 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.216427 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjwb\" (UniqueName: \"kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb\") pod \"ac4af0f0-76f9-4f25-8981-56ced5cee087\" (UID: \"ac4af0f0-76f9-4f25-8981-56ced5cee087\") " Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.223629 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb" (OuterVolumeSpecName: "kube-api-access-xcjwb") pod "ac4af0f0-76f9-4f25-8981-56ced5cee087" (UID: "ac4af0f0-76f9-4f25-8981-56ced5cee087"). InnerVolumeSpecName "kube-api-access-xcjwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.318135 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjwb\" (UniqueName: \"kubernetes.io/projected/ac4af0f0-76f9-4f25-8981-56ced5cee087-kube-api-access-xcjwb\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.753852 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" event={"ID":"ac4af0f0-76f9-4f25-8981-56ced5cee087","Type":"ContainerDied","Data":"b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747"} Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.753907 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tdbvh" Mar 11 09:56:04 crc kubenswrapper[4808]: I0311 09:56:04.754337 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b104e74c766a6b64881247d6a86c2df935bdaf856fd578131e3163f11861c747" Mar 11 09:56:05 crc kubenswrapper[4808]: I0311 09:56:05.143964 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-8kbd6"] Mar 11 09:56:05 crc kubenswrapper[4808]: I0311 09:56:05.153407 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-8kbd6"] Mar 11 09:56:05 crc kubenswrapper[4808]: I0311 09:56:05.789674 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:56:05 crc kubenswrapper[4808]: E0311 09:56:05.789975 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 09:56:05 crc kubenswrapper[4808]: I0311 09:56:05.800244 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89e377b-73f4-4e03-8d45-b56478ac9d0a" path="/var/lib/kubelet/pods/c89e377b-73f4-4e03-8d45-b56478ac9d0a/volumes" Mar 11 09:56:17 crc kubenswrapper[4808]: I0311 09:56:17.790939 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 09:56:18 crc kubenswrapper[4808]: I0311 09:56:18.863937 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c"} Mar 11 09:56:39 crc kubenswrapper[4808]: I0311 09:56:39.940721 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:39 crc kubenswrapper[4808]: E0311 09:56:39.942790 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4af0f0-76f9-4f25-8981-56ced5cee087" containerName="oc" Mar 11 09:56:39 crc kubenswrapper[4808]: I0311 09:56:39.942818 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4af0f0-76f9-4f25-8981-56ced5cee087" containerName="oc" Mar 11 09:56:39 crc kubenswrapper[4808]: I0311 09:56:39.943091 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4af0f0-76f9-4f25-8981-56ced5cee087" containerName="oc" Mar 11 09:56:39 crc kubenswrapper[4808]: I0311 09:56:39.944778 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:39 crc kubenswrapper[4808]: I0311 09:56:39.955896 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.045658 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74nl2\" (UniqueName: \"kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.045974 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content\") pod \"community-operators-df4c9\" (UID: 
\"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.046084 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.148121 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.148286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74nl2\" (UniqueName: \"kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.148331 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.149082 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities\") pod \"community-operators-df4c9\" (UID: 
\"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.149110 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.175377 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74nl2\" (UniqueName: \"kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2\") pod \"community-operators-df4c9\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.266305 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:40 crc kubenswrapper[4808]: I0311 09:56:40.894647 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:41 crc kubenswrapper[4808]: I0311 09:56:41.029712 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerStarted","Data":"5864992ffb7ea9f13c7d0796292b1eee9fedd228dff907afcb8b84acd8223cc4"} Mar 11 09:56:42 crc kubenswrapper[4808]: I0311 09:56:42.038015 4808 generic.go:334] "Generic (PLEG): container finished" podID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerID="7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652" exitCode=0 Mar 11 09:56:42 crc kubenswrapper[4808]: I0311 09:56:42.038193 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerDied","Data":"7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652"} Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.718598 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.720763 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.726849 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.894056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krtd\" (UniqueName: \"kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.894128 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.894147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.996126 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krtd\" (UniqueName: \"kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.996210 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.996240 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.996759 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:43 crc kubenswrapper[4808]: I0311 09:56:43.996783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:44 crc kubenswrapper[4808]: I0311 09:56:44.017162 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krtd\" (UniqueName: \"kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd\") pod \"redhat-marketplace-9nwbh\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:44 crc kubenswrapper[4808]: I0311 09:56:44.036758 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:44 crc kubenswrapper[4808]: I0311 09:56:44.069733 4808 generic.go:334] "Generic (PLEG): container finished" podID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerID="e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf" exitCode=0 Mar 11 09:56:44 crc kubenswrapper[4808]: I0311 09:56:44.069785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerDied","Data":"e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf"} Mar 11 09:56:44 crc kubenswrapper[4808]: I0311 09:56:44.483662 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:44 crc kubenswrapper[4808]: W0311 09:56:44.486409 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02b3880_71aa_4511_aa55_c91aabca9769.slice/crio-d0f776bee0c05cb1c74bd73efe41d8495dc47da9778159a6b35e43a0bbdae3b8 WatchSource:0}: Error finding container d0f776bee0c05cb1c74bd73efe41d8495dc47da9778159a6b35e43a0bbdae3b8: Status 404 returned error can't find the container with id d0f776bee0c05cb1c74bd73efe41d8495dc47da9778159a6b35e43a0bbdae3b8 Mar 11 09:56:45 crc kubenswrapper[4808]: I0311 09:56:45.078690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerStarted","Data":"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60"} Mar 11 09:56:45 crc kubenswrapper[4808]: I0311 09:56:45.079750 4808 generic.go:334] "Generic (PLEG): container finished" podID="a02b3880-71aa-4511-aa55-c91aabca9769" containerID="090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4" exitCode=0 Mar 11 09:56:45 crc kubenswrapper[4808]: I0311 
09:56:45.079882 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerDied","Data":"090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4"} Mar 11 09:56:45 crc kubenswrapper[4808]: I0311 09:56:45.080056 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerStarted","Data":"d0f776bee0c05cb1c74bd73efe41d8495dc47da9778159a6b35e43a0bbdae3b8"} Mar 11 09:56:45 crc kubenswrapper[4808]: I0311 09:56:45.106785 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-df4c9" podStartSLOduration=3.309527175 podStartE2EDuration="6.106767048s" podCreationTimestamp="2026-03-11 09:56:39 +0000 UTC" firstStartedPulling="2026-03-11 09:56:42.040026265 +0000 UTC m=+4652.993349585" lastFinishedPulling="2026-03-11 09:56:44.837266138 +0000 UTC m=+4655.790589458" observedRunningTime="2026-03-11 09:56:45.103578388 +0000 UTC m=+4656.056901738" watchObservedRunningTime="2026-03-11 09:56:45.106767048 +0000 UTC m=+4656.060090368" Mar 11 09:56:47 crc kubenswrapper[4808]: I0311 09:56:47.094427 4808 generic.go:334] "Generic (PLEG): container finished" podID="a02b3880-71aa-4511-aa55-c91aabca9769" containerID="fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f" exitCode=0 Mar 11 09:56:47 crc kubenswrapper[4808]: I0311 09:56:47.094513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerDied","Data":"fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f"} Mar 11 09:56:48 crc kubenswrapper[4808]: I0311 09:56:48.103015 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" 
event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerStarted","Data":"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e"} Mar 11 09:56:48 crc kubenswrapper[4808]: I0311 09:56:48.125638 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9nwbh" podStartSLOduration=2.639292654 podStartE2EDuration="5.125619149s" podCreationTimestamp="2026-03-11 09:56:43 +0000 UTC" firstStartedPulling="2026-03-11 09:56:45.081099763 +0000 UTC m=+4656.034423083" lastFinishedPulling="2026-03-11 09:56:47.567426258 +0000 UTC m=+4658.520749578" observedRunningTime="2026-03-11 09:56:48.124507558 +0000 UTC m=+4659.077830888" watchObservedRunningTime="2026-03-11 09:56:48.125619149 +0000 UTC m=+4659.078942479" Mar 11 09:56:50 crc kubenswrapper[4808]: I0311 09:56:50.267102 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:50 crc kubenswrapper[4808]: I0311 09:56:50.267684 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:50 crc kubenswrapper[4808]: I0311 09:56:50.318052 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:51 crc kubenswrapper[4808]: I0311 09:56:51.170549 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:51 crc kubenswrapper[4808]: I0311 09:56:51.511696 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.136548 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-df4c9" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="registry-server" 
containerID="cri-o://e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60" gracePeriod=2 Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.548519 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.628761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities\") pod \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.628927 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content\") pod \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.628993 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74nl2\" (UniqueName: \"kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2\") pod \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\" (UID: \"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902\") " Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.630401 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities" (OuterVolumeSpecName: "utilities") pod "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" (UID: "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.637599 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2" (OuterVolumeSpecName: "kube-api-access-74nl2") pod "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" (UID: "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902"). InnerVolumeSpecName "kube-api-access-74nl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.730524 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74nl2\" (UniqueName: \"kubernetes.io/projected/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-kube-api-access-74nl2\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:53 crc kubenswrapper[4808]: I0311 09:56:53.730574 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.037563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.037623 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.075682 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.146615 4808 generic.go:334] "Generic (PLEG): container finished" podID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerID="e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60" exitCode=0 Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.146664 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerDied","Data":"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60"} Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.146731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df4c9" event={"ID":"d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902","Type":"ContainerDied","Data":"5864992ffb7ea9f13c7d0796292b1eee9fedd228dff907afcb8b84acd8223cc4"} Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.146767 4808 scope.go:117] "RemoveContainer" containerID="e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.148298 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df4c9" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.169810 4808 scope.go:117] "RemoveContainer" containerID="e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.191137 4808 scope.go:117] "RemoveContainer" containerID="7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.191912 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.226782 4808 scope.go:117] "RemoveContainer" containerID="e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60" Mar 11 09:56:54 crc kubenswrapper[4808]: E0311 09:56:54.227353 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60\": container with ID starting with e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60 not found: ID 
does not exist" containerID="e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.227427 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60"} err="failed to get container status \"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60\": rpc error: code = NotFound desc = could not find container \"e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60\": container with ID starting with e385055686e59f7c0a9aab9f26168a6619e225c5f8586583696987a12b61cf60 not found: ID does not exist" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.227459 4808 scope.go:117] "RemoveContainer" containerID="e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf" Mar 11 09:56:54 crc kubenswrapper[4808]: E0311 09:56:54.227936 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf\": container with ID starting with e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf not found: ID does not exist" containerID="e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.227974 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf"} err="failed to get container status \"e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf\": rpc error: code = NotFound desc = could not find container \"e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf\": container with ID starting with e7e60e4ac9c49847dccd06ea6a1bc4c390ba3fd6f53ca3c3d83213fe041093bf not found: ID does not exist" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.227999 4808 
scope.go:117] "RemoveContainer" containerID="7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652" Mar 11 09:56:54 crc kubenswrapper[4808]: E0311 09:56:54.228281 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652\": container with ID starting with 7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652 not found: ID does not exist" containerID="7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.228325 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652"} err="failed to get container status \"7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652\": rpc error: code = NotFound desc = could not find container \"7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652\": container with ID starting with 7cca0cf0afb12687054e2c53f46cc9a5c05890ff45981a771b7ea0d7d2923652 not found: ID does not exist" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.302354 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" (UID: "d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.338615 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.480482 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:54 crc kubenswrapper[4808]: I0311 09:56:54.486179 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-df4c9"] Mar 11 09:56:55 crc kubenswrapper[4808]: I0311 09:56:55.784438 4808 scope.go:117] "RemoveContainer" containerID="2cf2592e5817b1526fe38371eb10716efac3d6c42811c750314883ffd5699e04" Mar 11 09:56:55 crc kubenswrapper[4808]: I0311 09:56:55.798468 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" path="/var/lib/kubelet/pods/d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902/volumes" Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.308325 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.308568 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9nwbh" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="registry-server" containerID="cri-o://65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e" gracePeriod=2 Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.735238 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.875268 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5krtd\" (UniqueName: \"kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd\") pod \"a02b3880-71aa-4511-aa55-c91aabca9769\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.875443 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities\") pod \"a02b3880-71aa-4511-aa55-c91aabca9769\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.875493 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content\") pod \"a02b3880-71aa-4511-aa55-c91aabca9769\" (UID: \"a02b3880-71aa-4511-aa55-c91aabca9769\") " Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.877224 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities" (OuterVolumeSpecName: "utilities") pod "a02b3880-71aa-4511-aa55-c91aabca9769" (UID: "a02b3880-71aa-4511-aa55-c91aabca9769"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.884414 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd" (OuterVolumeSpecName: "kube-api-access-5krtd") pod "a02b3880-71aa-4511-aa55-c91aabca9769" (UID: "a02b3880-71aa-4511-aa55-c91aabca9769"). InnerVolumeSpecName "kube-api-access-5krtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.977537 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5krtd\" (UniqueName: \"kubernetes.io/projected/a02b3880-71aa-4511-aa55-c91aabca9769-kube-api-access-5krtd\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:56 crc kubenswrapper[4808]: I0311 09:56:56.977579 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.030127 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a02b3880-71aa-4511-aa55-c91aabca9769" (UID: "a02b3880-71aa-4511-aa55-c91aabca9769"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.078619 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02b3880-71aa-4511-aa55-c91aabca9769-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.173499 4808 generic.go:334] "Generic (PLEG): container finished" podID="a02b3880-71aa-4511-aa55-c91aabca9769" containerID="65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e" exitCode=0 Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.173545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerDied","Data":"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e"} Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.173580 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9nwbh" event={"ID":"a02b3880-71aa-4511-aa55-c91aabca9769","Type":"ContainerDied","Data":"d0f776bee0c05cb1c74bd73efe41d8495dc47da9778159a6b35e43a0bbdae3b8"} Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.173565 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nwbh" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.173598 4808 scope.go:117] "RemoveContainer" containerID="65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.198278 4808 scope.go:117] "RemoveContainer" containerID="fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.214139 4808 scope.go:117] "RemoveContainer" containerID="090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.229594 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.236587 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nwbh"] Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.247955 4808 scope.go:117] "RemoveContainer" containerID="65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e" Mar 11 09:56:57 crc kubenswrapper[4808]: E0311 09:56:57.248268 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e\": container with ID starting with 65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e not found: ID does not exist" containerID="65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.248291 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e"} err="failed to get container status \"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e\": rpc error: code = NotFound desc = could not find container \"65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e\": container with ID starting with 65813ccd5d0b758c750552d523599c00cfadca57ea65ba9f9b21be65d1ce075e not found: ID does not exist" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.248308 4808 scope.go:117] "RemoveContainer" containerID="fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f" Mar 11 09:56:57 crc kubenswrapper[4808]: E0311 09:56:57.248546 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f\": container with ID starting with fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f not found: ID does not exist" containerID="fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.248564 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f"} err="failed to get container status \"fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f\": rpc error: code = NotFound desc = could not find container \"fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f\": container with ID starting with fbaa88b22216a3ef9d1a5d927cef0284912f03deebe64868e8410ca1b274364f not found: ID does not exist" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.248578 4808 scope.go:117] "RemoveContainer" containerID="090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4" Mar 11 09:56:57 crc kubenswrapper[4808]: E0311 
09:56:57.248746 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4\": container with ID starting with 090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4 not found: ID does not exist" containerID="090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.248763 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4"} err="failed to get container status \"090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4\": rpc error: code = NotFound desc = could not find container \"090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4\": container with ID starting with 090932b234421c568c29a976e014ce5ecdf8798da0d9a367bf1797e03cb94fb4 not found: ID does not exist" Mar 11 09:56:57 crc kubenswrapper[4808]: I0311 09:56:57.804729 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" path="/var/lib/kubelet/pods/a02b3880-71aa-4511-aa55-c91aabca9769/volumes" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.012295 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013346 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="extract-content" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013360 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="extract-content" Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013396 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" 
containerName="extract-utilities" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013403 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="extract-utilities" Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013414 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013420 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013433 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013439 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013449 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="extract-utilities" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" containerName="extract-utilities" Mar 11 09:57:20 crc kubenswrapper[4808]: E0311 09:57:20.013465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="extract-content" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013470 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="extract-content" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013595 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6db7f55-c4ab-4c7c-87f3-64b5f6fd2902" 
containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.013605 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02b3880-71aa-4511-aa55-c91aabca9769" containerName="registry-server" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.014595 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.019740 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.128761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.129236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7c7n\" (UniqueName: \"kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.129348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.230452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.230535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.230586 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7c7n\" (UniqueName: \"kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.231406 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.231435 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.255425 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7c7n\" (UniqueName: 
\"kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n\") pod \"redhat-operators-mkbbs\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.367909 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:20 crc kubenswrapper[4808]: I0311 09:57:20.820832 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:21 crc kubenswrapper[4808]: I0311 09:57:21.350659 4808 generic.go:334] "Generic (PLEG): container finished" podID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerID="964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80" exitCode=0 Mar 11 09:57:21 crc kubenswrapper[4808]: I0311 09:57:21.350760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerDied","Data":"964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80"} Mar 11 09:57:21 crc kubenswrapper[4808]: I0311 09:57:21.350819 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerStarted","Data":"09ebbad72062054e1486b6f4d143541c85cb510ef9cd68ea410a5017bbe5932f"} Mar 11 09:57:21 crc kubenswrapper[4808]: I0311 09:57:21.352863 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:57:22 crc kubenswrapper[4808]: I0311 09:57:22.360399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerStarted","Data":"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd"} Mar 11 09:57:23 crc 
kubenswrapper[4808]: I0311 09:57:23.367168 4808 generic.go:334] "Generic (PLEG): container finished" podID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerID="2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd" exitCode=0 Mar 11 09:57:23 crc kubenswrapper[4808]: I0311 09:57:23.367244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerDied","Data":"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd"} Mar 11 09:57:24 crc kubenswrapper[4808]: I0311 09:57:24.375826 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerStarted","Data":"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708"} Mar 11 09:57:24 crc kubenswrapper[4808]: I0311 09:57:24.394122 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkbbs" podStartSLOduration=2.946997157 podStartE2EDuration="5.394106764s" podCreationTimestamp="2026-03-11 09:57:19 +0000 UTC" firstStartedPulling="2026-03-11 09:57:21.352591723 +0000 UTC m=+4692.305915043" lastFinishedPulling="2026-03-11 09:57:23.79970133 +0000 UTC m=+4694.753024650" observedRunningTime="2026-03-11 09:57:24.390405499 +0000 UTC m=+4695.343728829" watchObservedRunningTime="2026-03-11 09:57:24.394106764 +0000 UTC m=+4695.347430084" Mar 11 09:57:30 crc kubenswrapper[4808]: I0311 09:57:30.368555 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:30 crc kubenswrapper[4808]: I0311 09:57:30.369126 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:31 crc kubenswrapper[4808]: I0311 09:57:31.417935 4808 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-mkbbs" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="registry-server" probeResult="failure" output=< Mar 11 09:57:31 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 09:57:31 crc kubenswrapper[4808]: > Mar 11 09:57:40 crc kubenswrapper[4808]: I0311 09:57:40.413588 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:40 crc kubenswrapper[4808]: I0311 09:57:40.475577 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:40 crc kubenswrapper[4808]: I0311 09:57:40.648037 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:41 crc kubenswrapper[4808]: I0311 09:57:41.509301 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkbbs" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="registry-server" containerID="cri-o://7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708" gracePeriod=2 Mar 11 09:57:41 crc kubenswrapper[4808]: I0311 09:57:41.909923 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.040454 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities\") pod \"22cff883-b6e8-4b5c-94c2-e310e66278b1\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.040528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content\") pod \"22cff883-b6e8-4b5c-94c2-e310e66278b1\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.040686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7c7n\" (UniqueName: \"kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n\") pod \"22cff883-b6e8-4b5c-94c2-e310e66278b1\" (UID: \"22cff883-b6e8-4b5c-94c2-e310e66278b1\") " Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.043299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities" (OuterVolumeSpecName: "utilities") pod "22cff883-b6e8-4b5c-94c2-e310e66278b1" (UID: "22cff883-b6e8-4b5c-94c2-e310e66278b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.048209 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n" (OuterVolumeSpecName: "kube-api-access-f7c7n") pod "22cff883-b6e8-4b5c-94c2-e310e66278b1" (UID: "22cff883-b6e8-4b5c-94c2-e310e66278b1"). InnerVolumeSpecName "kube-api-access-f7c7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.143713 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7c7n\" (UniqueName: \"kubernetes.io/projected/22cff883-b6e8-4b5c-94c2-e310e66278b1-kube-api-access-f7c7n\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.143786 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.177034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22cff883-b6e8-4b5c-94c2-e310e66278b1" (UID: "22cff883-b6e8-4b5c-94c2-e310e66278b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.244974 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cff883-b6e8-4b5c-94c2-e310e66278b1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.521306 4808 generic.go:334] "Generic (PLEG): container finished" podID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerID="7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708" exitCode=0 Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.521398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerDied","Data":"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708"} Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.522697 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mkbbs" event={"ID":"22cff883-b6e8-4b5c-94c2-e310e66278b1","Type":"ContainerDied","Data":"09ebbad72062054e1486b6f4d143541c85cb510ef9cd68ea410a5017bbe5932f"} Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.522760 4808 scope.go:117] "RemoveContainer" containerID="7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.521430 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkbbs" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.559861 4808 scope.go:117] "RemoveContainer" containerID="2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.572006 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.581531 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkbbs"] Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.588805 4808 scope.go:117] "RemoveContainer" containerID="964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.608839 4808 scope.go:117] "RemoveContainer" containerID="7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708" Mar 11 09:57:42 crc kubenswrapper[4808]: E0311 09:57:42.609492 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708\": container with ID starting with 7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708 not found: ID does not exist" containerID="7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.609584 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708"} err="failed to get container status \"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708\": rpc error: code = NotFound desc = could not find container \"7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708\": container with ID starting with 7de640f1e74bb2a15d3b12ca6731529674763ec7c9d68c6dd2009de2b485e708 not found: ID does not exist" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.609631 4808 scope.go:117] "RemoveContainer" containerID="2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd" Mar 11 09:57:42 crc kubenswrapper[4808]: E0311 09:57:42.610107 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd\": container with ID starting with 2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd not found: ID does not exist" containerID="2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.610147 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd"} err="failed to get container status \"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd\": rpc error: code = NotFound desc = could not find container \"2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd\": container with ID starting with 2026fc799b0bbe96d91a7503f81c6912f2758a3ff8ba29cc9269468b5eb7d0dd not found: ID does not exist" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.610173 4808 scope.go:117] "RemoveContainer" containerID="964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80" Mar 11 09:57:42 crc kubenswrapper[4808]: E0311 
09:57:42.610652 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80\": container with ID starting with 964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80 not found: ID does not exist" containerID="964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80" Mar 11 09:57:42 crc kubenswrapper[4808]: I0311 09:57:42.610704 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80"} err="failed to get container status \"964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80\": rpc error: code = NotFound desc = could not find container \"964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80\": container with ID starting with 964a5c5728b4102775fd50640b640c6e5f4388e40a5627be434d46973792dc80 not found: ID does not exist" Mar 11 09:57:42 crc kubenswrapper[4808]: E0311 09:57:42.692974 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22cff883_b6e8_4b5c_94c2_e310e66278b1.slice\": RecentStats: unable to find data in memory cache]" Mar 11 09:57:43 crc kubenswrapper[4808]: I0311 09:57:43.797885 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" path="/var/lib/kubelet/pods/22cff883-b6e8-4b5c-94c2-e310e66278b1/volumes" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.147082 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553718-d5vq6"] Mar 11 09:58:00 crc kubenswrapper[4808]: E0311 09:58:00.147785 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="extract-content" Mar 11 09:58:00 crc 
kubenswrapper[4808]: I0311 09:58:00.147800 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4808]: E0311 09:58:00.147822 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.147831 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4808]: E0311 09:58:00.147856 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.147864 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.148061 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="22cff883-b6e8-4b5c-94c2-e310e66278b1" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.148627 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.151327 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.152029 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.155003 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.156351 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-d5vq6"] Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.269923 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95qr\" (UniqueName: \"kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr\") pod \"auto-csr-approver-29553718-d5vq6\" (UID: \"58408cc7-ec42-47a6-b60e-d052fbda4505\") " pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.371492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v95qr\" (UniqueName: \"kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr\") pod \"auto-csr-approver-29553718-d5vq6\" (UID: \"58408cc7-ec42-47a6-b60e-d052fbda4505\") " pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.391639 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v95qr\" (UniqueName: \"kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr\") pod \"auto-csr-approver-29553718-d5vq6\" (UID: \"58408cc7-ec42-47a6-b60e-d052fbda4505\") " 
pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.469112 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:00 crc kubenswrapper[4808]: I0311 09:58:00.898936 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-d5vq6"] Mar 11 09:58:01 crc kubenswrapper[4808]: I0311 09:58:01.663655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" event={"ID":"58408cc7-ec42-47a6-b60e-d052fbda4505","Type":"ContainerStarted","Data":"a63646a638b594f2ccce7ec1ccd9c5e432fc002852d6ea0661f18ea544b845aa"} Mar 11 09:58:02 crc kubenswrapper[4808]: I0311 09:58:02.673498 4808 generic.go:334] "Generic (PLEG): container finished" podID="58408cc7-ec42-47a6-b60e-d052fbda4505" containerID="5ec5c632c5460562e2ba770836fd32ff4618db46aa71c437352fdb621c2205af" exitCode=0 Mar 11 09:58:02 crc kubenswrapper[4808]: I0311 09:58:02.673593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" event={"ID":"58408cc7-ec42-47a6-b60e-d052fbda4505","Type":"ContainerDied","Data":"5ec5c632c5460562e2ba770836fd32ff4618db46aa71c437352fdb621c2205af"} Mar 11 09:58:03 crc kubenswrapper[4808]: I0311 09:58:03.946014 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.038166 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v95qr\" (UniqueName: \"kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr\") pod \"58408cc7-ec42-47a6-b60e-d052fbda4505\" (UID: \"58408cc7-ec42-47a6-b60e-d052fbda4505\") " Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.046094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr" (OuterVolumeSpecName: "kube-api-access-v95qr") pod "58408cc7-ec42-47a6-b60e-d052fbda4505" (UID: "58408cc7-ec42-47a6-b60e-d052fbda4505"). InnerVolumeSpecName "kube-api-access-v95qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.139766 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v95qr\" (UniqueName: \"kubernetes.io/projected/58408cc7-ec42-47a6-b60e-d052fbda4505-kube-api-access-v95qr\") on node \"crc\" DevicePath \"\"" Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.693216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" event={"ID":"58408cc7-ec42-47a6-b60e-d052fbda4505","Type":"ContainerDied","Data":"a63646a638b594f2ccce7ec1ccd9c5e432fc002852d6ea0661f18ea544b845aa"} Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.693275 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63646a638b594f2ccce7ec1ccd9c5e432fc002852d6ea0661f18ea544b845aa" Mar 11 09:58:04 crc kubenswrapper[4808]: I0311 09:58:04.693292 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-d5vq6" Mar 11 09:58:05 crc kubenswrapper[4808]: I0311 09:58:05.028693 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-wtjhc"] Mar 11 09:58:05 crc kubenswrapper[4808]: I0311 09:58:05.035253 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-wtjhc"] Mar 11 09:58:05 crc kubenswrapper[4808]: I0311 09:58:05.799502 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e557c917-d211-48a5-8e7d-e854cf2c8fb9" path="/var/lib/kubelet/pods/e557c917-d211-48a5-8e7d-e854cf2c8fb9/volumes" Mar 11 09:58:46 crc kubenswrapper[4808]: I0311 09:58:46.028192 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:58:46 crc kubenswrapper[4808]: I0311 09:58:46.028743 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:58:55 crc kubenswrapper[4808]: I0311 09:58:55.891117 4808 scope.go:117] "RemoveContainer" containerID="8bc2a52eb96f9670c872e3553df816c354c711b0341e7cc1792562c85fd573e2" Mar 11 09:59:16 crc kubenswrapper[4808]: I0311 09:59:16.027095 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:59:16 crc kubenswrapper[4808]: 
I0311 09:59:16.027729 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.027775 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.029549 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.029651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.030625 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.030735 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" 
containerName="machine-config-daemon" containerID="cri-o://12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c" gracePeriod=600 Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.554879 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c" exitCode=0 Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.555277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c"} Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.555452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97"} Mar 11 09:59:46 crc kubenswrapper[4808]: I0311 09:59:46.555490 4808 scope.go:117] "RemoveContainer" containerID="27e0f48b3822e212a74bc43bb71376ffbdc19fd289c3afa07743f08ceed097ff" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.160800 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553720-ppv88"] Mar 11 10:00:00 crc kubenswrapper[4808]: E0311 10:00:00.161728 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58408cc7-ec42-47a6-b60e-d052fbda4505" containerName="oc" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.161745 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="58408cc7-ec42-47a6-b60e-d052fbda4505" containerName="oc" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.161892 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="58408cc7-ec42-47a6-b60e-d052fbda4505" containerName="oc" Mar 11 10:00:00 
crc kubenswrapper[4808]: I0311 10:00:00.162392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.166440 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.166511 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.166440 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.175295 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn"] Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.176960 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.179689 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.180001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.201704 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-ppv88"] Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.210256 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn"] Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.218807 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.218917 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhfs\" (UniqueName: \"kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.218981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zx2q\" (UniqueName: 
\"kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q\") pod \"auto-csr-approver-29553720-ppv88\" (UID: \"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd\") " pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.219014 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.320032 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.320124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhfs\" (UniqueName: \"kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.320169 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zx2q\" (UniqueName: \"kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q\") pod \"auto-csr-approver-29553720-ppv88\" (UID: \"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd\") " pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 
10:00:00.320193 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.322065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.326408 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.346163 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhfs\" (UniqueName: \"kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs\") pod \"collect-profiles-29553720-82pmn\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.347937 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zx2q\" (UniqueName: \"kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q\") pod \"auto-csr-approver-29553720-ppv88\" (UID: \"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd\") " 
pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.521900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.530877 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:00 crc kubenswrapper[4808]: I0311 10:00:00.782272 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn"] Mar 11 10:00:00 crc kubenswrapper[4808]: W0311 10:00:00.785132 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0aaa25d_7347_4716_96b6_d128fd062575.slice/crio-55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168 WatchSource:0}: Error finding container 55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168: Status 404 returned error can't find the container with id 55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168 Mar 11 10:00:01 crc kubenswrapper[4808]: W0311 10:00:01.064526 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2488fd6_56fe_4c9b_a3e6_6cd4a0fe14fd.slice/crio-6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76 WatchSource:0}: Error finding container 6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76: Status 404 returned error can't find the container with id 6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76 Mar 11 10:00:01 crc kubenswrapper[4808]: I0311 10:00:01.071059 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-ppv88"] Mar 11 10:00:01 crc kubenswrapper[4808]: I0311 10:00:01.686480 4808 generic.go:334] 
"Generic (PLEG): container finished" podID="d0aaa25d-7347-4716-96b6-d128fd062575" containerID="a370a2cddc53940229a3a11eb7d05259a851519bf1adb0ba52bc202fe0a23f15" exitCode=0 Mar 11 10:00:01 crc kubenswrapper[4808]: I0311 10:00:01.686692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" event={"ID":"d0aaa25d-7347-4716-96b6-d128fd062575","Type":"ContainerDied","Data":"a370a2cddc53940229a3a11eb7d05259a851519bf1adb0ba52bc202fe0a23f15"} Mar 11 10:00:01 crc kubenswrapper[4808]: I0311 10:00:01.686923 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" event={"ID":"d0aaa25d-7347-4716-96b6-d128fd062575","Type":"ContainerStarted","Data":"55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168"} Mar 11 10:00:01 crc kubenswrapper[4808]: I0311 10:00:01.689936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553720-ppv88" event={"ID":"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd","Type":"ContainerStarted","Data":"6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76"} Mar 11 10:00:02 crc kubenswrapper[4808]: I0311 10:00:02.955498 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.060680 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhfs\" (UniqueName: \"kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs\") pod \"d0aaa25d-7347-4716-96b6-d128fd062575\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.060719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume\") pod \"d0aaa25d-7347-4716-96b6-d128fd062575\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.060781 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume\") pod \"d0aaa25d-7347-4716-96b6-d128fd062575\" (UID: \"d0aaa25d-7347-4716-96b6-d128fd062575\") " Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.061643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0aaa25d-7347-4716-96b6-d128fd062575" (UID: "d0aaa25d-7347-4716-96b6-d128fd062575"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.065965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0aaa25d-7347-4716-96b6-d128fd062575" (UID: "d0aaa25d-7347-4716-96b6-d128fd062575"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.066063 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs" (OuterVolumeSpecName: "kube-api-access-4rhfs") pod "d0aaa25d-7347-4716-96b6-d128fd062575" (UID: "d0aaa25d-7347-4716-96b6-d128fd062575"). InnerVolumeSpecName "kube-api-access-4rhfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.162906 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhfs\" (UniqueName: \"kubernetes.io/projected/d0aaa25d-7347-4716-96b6-d128fd062575-kube-api-access-4rhfs\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.163263 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aaa25d-7347-4716-96b6-d128fd062575-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.163274 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0aaa25d-7347-4716-96b6-d128fd062575-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.707122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" event={"ID":"d0aaa25d-7347-4716-96b6-d128fd062575","Type":"ContainerDied","Data":"55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168"} Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.707162 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55998fb58e34211f47286885d3d5b95631824bf41df96a78033d1dcf2b99f168" Mar 11 10:00:03 crc kubenswrapper[4808]: I0311 10:00:03.707571 4808 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-82pmn" Mar 11 10:00:04 crc kubenswrapper[4808]: I0311 10:00:04.028047 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq"] Mar 11 10:00:04 crc kubenswrapper[4808]: I0311 10:00:04.034142 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-pmqvq"] Mar 11 10:00:05 crc kubenswrapper[4808]: I0311 10:00:05.730253 4808 generic.go:334] "Generic (PLEG): container finished" podID="b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" containerID="36aaee5e6c833a2206b267909ae8eb7c63c75287f60de1ed4e09234e56826124" exitCode=0 Mar 11 10:00:05 crc kubenswrapper[4808]: I0311 10:00:05.730345 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553720-ppv88" event={"ID":"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd","Type":"ContainerDied","Data":"36aaee5e6c833a2206b267909ae8eb7c63c75287f60de1ed4e09234e56826124"} Mar 11 10:00:05 crc kubenswrapper[4808]: I0311 10:00:05.805929 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069d0ae5-50fb-466e-878f-14380d00e7e4" path="/var/lib/kubelet/pods/069d0ae5-50fb-466e-878f-14380d00e7e4/volumes" Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.015415 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.128139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zx2q\" (UniqueName: \"kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q\") pod \"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd\" (UID: \"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd\") " Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.134466 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q" (OuterVolumeSpecName: "kube-api-access-9zx2q") pod "b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" (UID: "b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd"). InnerVolumeSpecName "kube-api-access-9zx2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.229312 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zx2q\" (UniqueName: \"kubernetes.io/projected/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd-kube-api-access-9zx2q\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.749654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553720-ppv88" event={"ID":"b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd","Type":"ContainerDied","Data":"6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76"} Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.750193 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da93e6dfeea9129b45b639beff9d4c4802b89ac4efd9e0e0c73af050a519f76" Mar 11 10:00:07 crc kubenswrapper[4808]: I0311 10:00:07.749737 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-ppv88" Mar 11 10:00:08 crc kubenswrapper[4808]: I0311 10:00:08.084195 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-l8q9s"] Mar 11 10:00:08 crc kubenswrapper[4808]: I0311 10:00:08.093973 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-l8q9s"] Mar 11 10:00:09 crc kubenswrapper[4808]: I0311 10:00:09.803818 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1597b1b-7f3f-4285-a9b4-909a884416e0" path="/var/lib/kubelet/pods/f1597b1b-7f3f-4285-a9b4-909a884416e0/volumes" Mar 11 10:00:55 crc kubenswrapper[4808]: I0311 10:00:55.972299 4808 scope.go:117] "RemoveContainer" containerID="25b72a33bfd5cc9f1796797f22bee5e2e0653ceb7aae48a65ad3d8d0e86aaa31" Mar 11 10:00:55 crc kubenswrapper[4808]: I0311 10:00:55.996549 4808 scope.go:117] "RemoveContainer" containerID="670cd217bb0ebacbb7aebfe37a6ba8da7d71a66b2675490b95d3ea421bc30b86" Mar 11 10:01:46 crc kubenswrapper[4808]: I0311 10:01:46.027335 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:01:46 crc kubenswrapper[4808]: I0311 10:01:46.027913 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.158190 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553722-67vl8"] Mar 11 10:02:00 crc kubenswrapper[4808]: 
E0311 10:02:00.159228 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.159251 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4808]: E0311 10:02:00.159301 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aaa25d-7347-4716-96b6-d128fd062575" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.159315 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aaa25d-7347-4716-96b6-d128fd062575" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.159551 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aaa25d-7347-4716-96b6-d128fd062575" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.159597 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.160268 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.162894 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.163825 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.164487 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.170142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-67vl8"] Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.323682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtrt\" (UniqueName: \"kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt\") pod \"auto-csr-approver-29553722-67vl8\" (UID: \"d924bd5e-3ed1-4cb4-94f1-87c160c51b47\") " pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.425446 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjtrt\" (UniqueName: \"kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt\") pod \"auto-csr-approver-29553722-67vl8\" (UID: \"d924bd5e-3ed1-4cb4-94f1-87c160c51b47\") " pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.454337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjtrt\" (UniqueName: \"kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt\") pod \"auto-csr-approver-29553722-67vl8\" (UID: \"d924bd5e-3ed1-4cb4-94f1-87c160c51b47\") " 
pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.491519 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:00 crc kubenswrapper[4808]: I0311 10:02:00.918806 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-67vl8"] Mar 11 10:02:01 crc kubenswrapper[4808]: I0311 10:02:01.750619 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-67vl8" event={"ID":"d924bd5e-3ed1-4cb4-94f1-87c160c51b47","Type":"ContainerStarted","Data":"a4aadcf71560a7a2e5f881486bc63ac4ce9433d56b1049998d09aabff655beba"} Mar 11 10:02:02 crc kubenswrapper[4808]: I0311 10:02:02.760520 4808 generic.go:334] "Generic (PLEG): container finished" podID="d924bd5e-3ed1-4cb4-94f1-87c160c51b47" containerID="f1e2ed260f77d424fc8de77efcebda27dd8376e683f2446d3fab90affdbe47a8" exitCode=0 Mar 11 10:02:02 crc kubenswrapper[4808]: I0311 10:02:02.760607 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-67vl8" event={"ID":"d924bd5e-3ed1-4cb4-94f1-87c160c51b47","Type":"ContainerDied","Data":"f1e2ed260f77d424fc8de77efcebda27dd8376e683f2446d3fab90affdbe47a8"} Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.029436 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.175287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjtrt\" (UniqueName: \"kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt\") pod \"d924bd5e-3ed1-4cb4-94f1-87c160c51b47\" (UID: \"d924bd5e-3ed1-4cb4-94f1-87c160c51b47\") " Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.180213 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt" (OuterVolumeSpecName: "kube-api-access-tjtrt") pod "d924bd5e-3ed1-4cb4-94f1-87c160c51b47" (UID: "d924bd5e-3ed1-4cb4-94f1-87c160c51b47"). InnerVolumeSpecName "kube-api-access-tjtrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.278577 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjtrt\" (UniqueName: \"kubernetes.io/projected/d924bd5e-3ed1-4cb4-94f1-87c160c51b47-kube-api-access-tjtrt\") on node \"crc\" DevicePath \"\"" Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.777898 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-67vl8" event={"ID":"d924bd5e-3ed1-4cb4-94f1-87c160c51b47","Type":"ContainerDied","Data":"a4aadcf71560a7a2e5f881486bc63ac4ce9433d56b1049998d09aabff655beba"} Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.778239 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4aadcf71560a7a2e5f881486bc63ac4ce9433d56b1049998d09aabff655beba" Mar 11 10:02:04 crc kubenswrapper[4808]: I0311 10:02:04.777980 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-67vl8" Mar 11 10:02:05 crc kubenswrapper[4808]: I0311 10:02:05.102476 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tdbvh"] Mar 11 10:02:05 crc kubenswrapper[4808]: I0311 10:02:05.108325 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tdbvh"] Mar 11 10:02:05 crc kubenswrapper[4808]: I0311 10:02:05.797462 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4af0f0-76f9-4f25-8981-56ced5cee087" path="/var/lib/kubelet/pods/ac4af0f0-76f9-4f25-8981-56ced5cee087/volumes" Mar 11 10:02:16 crc kubenswrapper[4808]: I0311 10:02:16.028037 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:02:16 crc kubenswrapper[4808]: I0311 10:02:16.028610 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:02:46 crc kubenswrapper[4808]: I0311 10:02:46.028120 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:02:46 crc kubenswrapper[4808]: I0311 10:02:46.029021 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:02:46 crc kubenswrapper[4808]: I0311 10:02:46.029094 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:02:46 crc kubenswrapper[4808]: I0311 10:02:46.030174 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:02:46 crc kubenswrapper[4808]: I0311 10:02:46.030304 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" gracePeriod=600 Mar 11 10:02:46 crc kubenswrapper[4808]: E0311 10:02:46.147976 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:02:47 crc kubenswrapper[4808]: I0311 10:02:47.093755 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" exitCode=0 Mar 11 
10:02:47 crc kubenswrapper[4808]: I0311 10:02:47.093836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97"} Mar 11 10:02:47 crc kubenswrapper[4808]: I0311 10:02:47.094101 4808 scope.go:117] "RemoveContainer" containerID="12c1b7d17d6cdda346703d5fa8021d94fc0d6eeac2ec33be129a049b77ea627c" Mar 11 10:02:47 crc kubenswrapper[4808]: I0311 10:02:47.094751 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:02:47 crc kubenswrapper[4808]: E0311 10:02:47.095033 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:02:56 crc kubenswrapper[4808]: I0311 10:02:56.112392 4808 scope.go:117] "RemoveContainer" containerID="164c244746403c328554c5b61186bf33d0c07d1d3cd9f823f1e3c4e7df5706d8" Mar 11 10:02:59 crc kubenswrapper[4808]: I0311 10:02:59.796780 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:02:59 crc kubenswrapper[4808]: E0311 10:02:59.797619 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:03:14 crc kubenswrapper[4808]: I0311 10:03:14.789264 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:03:14 crc kubenswrapper[4808]: E0311 10:03:14.790261 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:03:27 crc kubenswrapper[4808]: I0311 10:03:27.789860 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:03:27 crc kubenswrapper[4808]: E0311 10:03:27.790730 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:03:40 crc kubenswrapper[4808]: I0311 10:03:40.789610 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:03:40 crc kubenswrapper[4808]: E0311 10:03:40.790620 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:03:54 crc kubenswrapper[4808]: I0311 10:03:54.789541 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:03:54 crc kubenswrapper[4808]: E0311 10:03:54.790352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.154788 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553724-46br5"] Mar 11 10:04:00 crc kubenswrapper[4808]: E0311 10:04:00.155534 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d924bd5e-3ed1-4cb4-94f1-87c160c51b47" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.155555 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d924bd5e-3ed1-4cb4-94f1-87c160c51b47" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.155726 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d924bd5e-3ed1-4cb4-94f1-87c160c51b47" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.156200 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.159273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.159523 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.164396 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.175389 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-46br5"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.285481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd2v\" (UniqueName: \"kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v\") pod \"auto-csr-approver-29553724-46br5\" (UID: \"71f66e51-0d86-444d-a754-9ccd5aab1449\") " pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.387525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cd2v\" (UniqueName: \"kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v\") pod \"auto-csr-approver-29553724-46br5\" (UID: \"71f66e51-0d86-444d-a754-9ccd5aab1449\") " pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.417927 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cd2v\" (UniqueName: \"kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v\") pod \"auto-csr-approver-29553724-46br5\" (UID: \"71f66e51-0d86-444d-a754-9ccd5aab1449\") " 
pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.475768 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.810332 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kprp5"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.814946 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kprp5"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.927331 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-dt4dx"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.928437 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.935729 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.935772 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.935730 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.936239 4808 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-hjgnt" Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.939779 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dt4dx"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.946226 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-46br5"] Mar 11 10:04:00 crc kubenswrapper[4808]: I0311 10:04:00.954197 4808 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.096926 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.097109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.097175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wgqj\" (UniqueName: \"kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.198976 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wgqj\" (UniqueName: \"kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.199090 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " 
pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.199205 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.199604 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.200804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.225068 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wgqj\" (UniqueName: \"kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj\") pod \"crc-storage-crc-dt4dx\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.272857 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.488954 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dt4dx"] Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.714235 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-46br5" event={"ID":"71f66e51-0d86-444d-a754-9ccd5aab1449","Type":"ContainerStarted","Data":"09c7b591e1478936d9fbc9c1a57bbf80243347553325526e40e3d71a803d256b"} Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.716294 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dt4dx" event={"ID":"03c37041-c59b-4ba1-9935-5bfd40e32bdc","Type":"ContainerStarted","Data":"d9ff99c1718cda629d81a855862ba178b5599afa6a22dc5a7c20cb45840960b4"} Mar 11 10:04:01 crc kubenswrapper[4808]: I0311 10:04:01.803317 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de397de-380f-4ee1-9990-455ee195a71f" path="/var/lib/kubelet/pods/0de397de-380f-4ee1-9990-455ee195a71f/volumes" Mar 11 10:04:02 crc kubenswrapper[4808]: I0311 10:04:02.731032 4808 generic.go:334] "Generic (PLEG): container finished" podID="71f66e51-0d86-444d-a754-9ccd5aab1449" containerID="690c04ed0981ae8f63343af98ac6b0beb2df6a26eea293b03e71b92a88641915" exitCode=0 Mar 11 10:04:02 crc kubenswrapper[4808]: I0311 10:04:02.731143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-46br5" event={"ID":"71f66e51-0d86-444d-a754-9ccd5aab1449","Type":"ContainerDied","Data":"690c04ed0981ae8f63343af98ac6b0beb2df6a26eea293b03e71b92a88641915"} Mar 11 10:04:02 crc kubenswrapper[4808]: I0311 10:04:02.733254 4808 generic.go:334] "Generic (PLEG): container finished" podID="03c37041-c59b-4ba1-9935-5bfd40e32bdc" containerID="06763e31982846dd7285bad417060d27456bee8f54c74baff14643169b2b3a35" exitCode=0 Mar 11 10:04:02 crc kubenswrapper[4808]: I0311 
10:04:02.733299 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dt4dx" event={"ID":"03c37041-c59b-4ba1-9935-5bfd40e32bdc","Type":"ContainerDied","Data":"06763e31982846dd7285bad417060d27456bee8f54c74baff14643169b2b3a35"} Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.033771 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.113946 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.145480 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cd2v\" (UniqueName: \"kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v\") pod \"71f66e51-0d86-444d-a754-9ccd5aab1449\" (UID: \"71f66e51-0d86-444d-a754-9ccd5aab1449\") " Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.162717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v" (OuterVolumeSpecName: "kube-api-access-2cd2v") pod "71f66e51-0d86-444d-a754-9ccd5aab1449" (UID: "71f66e51-0d86-444d-a754-9ccd5aab1449"). InnerVolumeSpecName "kube-api-access-2cd2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247284 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wgqj\" (UniqueName: \"kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj\") pod \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247350 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt\") pod \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247384 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage\") pod \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\" (UID: \"03c37041-c59b-4ba1-9935-5bfd40e32bdc\") " Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "03c37041-c59b-4ba1-9935-5bfd40e32bdc" (UID: "03c37041-c59b-4ba1-9935-5bfd40e32bdc"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247604 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cd2v\" (UniqueName: \"kubernetes.io/projected/71f66e51-0d86-444d-a754-9ccd5aab1449-kube-api-access-2cd2v\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.247618 4808 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/03c37041-c59b-4ba1-9935-5bfd40e32bdc-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.250334 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj" (OuterVolumeSpecName: "kube-api-access-9wgqj") pod "03c37041-c59b-4ba1-9935-5bfd40e32bdc" (UID: "03c37041-c59b-4ba1-9935-5bfd40e32bdc"). InnerVolumeSpecName "kube-api-access-9wgqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.263041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "03c37041-c59b-4ba1-9935-5bfd40e32bdc" (UID: "03c37041-c59b-4ba1-9935-5bfd40e32bdc"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.349058 4808 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/03c37041-c59b-4ba1-9935-5bfd40e32bdc-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.349093 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wgqj\" (UniqueName: \"kubernetes.io/projected/03c37041-c59b-4ba1-9935-5bfd40e32bdc-kube-api-access-9wgqj\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.753986 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-46br5" event={"ID":"71f66e51-0d86-444d-a754-9ccd5aab1449","Type":"ContainerDied","Data":"09c7b591e1478936d9fbc9c1a57bbf80243347553325526e40e3d71a803d256b"} Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.754029 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c7b591e1478936d9fbc9c1a57bbf80243347553325526e40e3d71a803d256b" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.754096 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-46br5" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.756337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dt4dx" event={"ID":"03c37041-c59b-4ba1-9935-5bfd40e32bdc","Type":"ContainerDied","Data":"d9ff99c1718cda629d81a855862ba178b5599afa6a22dc5a7c20cb45840960b4"} Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.756418 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dt4dx" Mar 11 10:04:04 crc kubenswrapper[4808]: I0311 10:04:04.756425 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ff99c1718cda629d81a855862ba178b5599afa6a22dc5a7c20cb45840960b4" Mar 11 10:04:05 crc kubenswrapper[4808]: I0311 10:04:05.102725 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-d5vq6"] Mar 11 10:04:05 crc kubenswrapper[4808]: I0311 10:04:05.110057 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-d5vq6"] Mar 11 10:04:05 crc kubenswrapper[4808]: I0311 10:04:05.802430 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58408cc7-ec42-47a6-b60e-d052fbda4505" path="/var/lib/kubelet/pods/58408cc7-ec42-47a6-b60e-d052fbda4505/volumes" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.168319 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-dt4dx"] Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.178306 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-dt4dx"] Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.320820 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-dkvlk"] Mar 11 10:04:06 crc kubenswrapper[4808]: E0311 10:04:06.321085 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c37041-c59b-4ba1-9935-5bfd40e32bdc" containerName="storage" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.321100 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c37041-c59b-4ba1-9935-5bfd40e32bdc" containerName="storage" Mar 11 10:04:06 crc kubenswrapper[4808]: E0311 10:04:06.321118 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f66e51-0d86-444d-a754-9ccd5aab1449" containerName="oc" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.321125 4808 
state_mem.go:107] "Deleted CPUSet assignment" podUID="71f66e51-0d86-444d-a754-9ccd5aab1449" containerName="oc" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.321263 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c37041-c59b-4ba1-9935-5bfd40e32bdc" containerName="storage" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.321282 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f66e51-0d86-444d-a754-9ccd5aab1449" containerName="oc" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.321705 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.326037 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.326343 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.326376 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.327628 4808 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-hjgnt" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.335045 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dkvlk"] Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.481971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfhs\" (UniqueName: \"kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.482059 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.482101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.584103 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfhs\" (UniqueName: \"kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.584216 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.584328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.585019 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.585819 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.602028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfhs\" (UniqueName: \"kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs\") pod \"crc-storage-crc-dkvlk\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.641197 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:06 crc kubenswrapper[4808]: I0311 10:04:06.793675 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:04:06 crc kubenswrapper[4808]: E0311 10:04:06.794493 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:07 crc kubenswrapper[4808]: I0311 10:04:07.061087 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dkvlk"] Mar 11 10:04:07 crc kubenswrapper[4808]: I0311 10:04:07.785093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dkvlk" event={"ID":"d311c97e-4ef9-427b-8e27-cb0a1eb311f5","Type":"ContainerStarted","Data":"4e5a23d838f1312de239b2a2e438f5c27cd3857f36334f1d95236fac66aa0d7f"} Mar 11 10:04:07 crc kubenswrapper[4808]: I0311 10:04:07.800731 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c37041-c59b-4ba1-9935-5bfd40e32bdc" path="/var/lib/kubelet/pods/03c37041-c59b-4ba1-9935-5bfd40e32bdc/volumes" Mar 11 10:04:08 crc kubenswrapper[4808]: I0311 10:04:08.818814 4808 generic.go:334] "Generic (PLEG): container finished" podID="d311c97e-4ef9-427b-8e27-cb0a1eb311f5" containerID="bc96324751832b51d60ec09f2081be27117a2224371c51e4ababebd7096f3791" exitCode=0 Mar 11 10:04:08 crc kubenswrapper[4808]: I0311 10:04:08.819196 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dkvlk" 
event={"ID":"d311c97e-4ef9-427b-8e27-cb0a1eb311f5","Type":"ContainerDied","Data":"bc96324751832b51d60ec09f2081be27117a2224371c51e4ababebd7096f3791"} Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.125400 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.248478 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage\") pod \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.248684 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt\") pod \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.248762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvfhs\" (UniqueName: \"kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs\") pod \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\" (UID: \"d311c97e-4ef9-427b-8e27-cb0a1eb311f5\") " Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.248951 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d311c97e-4ef9-427b-8e27-cb0a1eb311f5" (UID: "d311c97e-4ef9-427b-8e27-cb0a1eb311f5"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.249317 4808 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.253704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs" (OuterVolumeSpecName: "kube-api-access-hvfhs") pod "d311c97e-4ef9-427b-8e27-cb0a1eb311f5" (UID: "d311c97e-4ef9-427b-8e27-cb0a1eb311f5"). InnerVolumeSpecName "kube-api-access-hvfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.272614 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d311c97e-4ef9-427b-8e27-cb0a1eb311f5" (UID: "d311c97e-4ef9-427b-8e27-cb0a1eb311f5"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.350838 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvfhs\" (UniqueName: \"kubernetes.io/projected/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-kube-api-access-hvfhs\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.350884 4808 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d311c97e-4ef9-427b-8e27-cb0a1eb311f5-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.838944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dkvlk" event={"ID":"d311c97e-4ef9-427b-8e27-cb0a1eb311f5","Type":"ContainerDied","Data":"4e5a23d838f1312de239b2a2e438f5c27cd3857f36334f1d95236fac66aa0d7f"} Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.839016 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5a23d838f1312de239b2a2e438f5c27cd3857f36334f1d95236fac66aa0d7f" Mar 11 10:04:10 crc kubenswrapper[4808]: I0311 10:04:10.839050 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dkvlk" Mar 11 10:04:17 crc kubenswrapper[4808]: I0311 10:04:17.789519 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:04:17 crc kubenswrapper[4808]: E0311 10:04:17.790508 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:31 crc kubenswrapper[4808]: I0311 10:04:31.789504 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:04:31 crc kubenswrapper[4808]: E0311 10:04:31.790401 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:42 crc kubenswrapper[4808]: I0311 10:04:42.790814 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:04:42 crc kubenswrapper[4808]: E0311 10:04:42.793251 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:53 crc kubenswrapper[4808]: I0311 10:04:53.790081 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:04:53 crc kubenswrapper[4808]: E0311 10:04:53.790910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:04:56 crc kubenswrapper[4808]: I0311 10:04:56.187085 4808 scope.go:117] "RemoveContainer" containerID="5ec5c632c5460562e2ba770836fd32ff4618db46aa71c437352fdb621c2205af" Mar 11 10:04:56 crc kubenswrapper[4808]: I0311 10:04:56.247213 4808 scope.go:117] "RemoveContainer" containerID="3ce4fd899ebb7cd15a305e5567fd153aa982549d70f7108f8fcbccbee1533a07" Mar 11 10:05:05 crc kubenswrapper[4808]: I0311 10:05:05.789818 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:05:05 crc kubenswrapper[4808]: E0311 10:05:05.790859 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:05:18 crc kubenswrapper[4808]: I0311 10:05:18.789688 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 
11 10:05:18 crc kubenswrapper[4808]: E0311 10:05:18.790435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:05:29 crc kubenswrapper[4808]: I0311 10:05:29.797787 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:05:29 crc kubenswrapper[4808]: E0311 10:05:29.798493 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:05:43 crc kubenswrapper[4808]: I0311 10:05:43.789833 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:05:43 crc kubenswrapper[4808]: E0311 10:05:43.790617 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:05:54 crc kubenswrapper[4808]: I0311 10:05:54.789853 4808 scope.go:117] "RemoveContainer" 
containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:05:54 crc kubenswrapper[4808]: E0311 10:05:54.790536 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.143113 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553726-5rpz8"] Mar 11 10:06:00 crc kubenswrapper[4808]: E0311 10:06:00.143760 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d311c97e-4ef9-427b-8e27-cb0a1eb311f5" containerName="storage" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.143774 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d311c97e-4ef9-427b-8e27-cb0a1eb311f5" containerName="storage" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.143972 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d311c97e-4ef9-427b-8e27-cb0a1eb311f5" containerName="storage" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.144577 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.147147 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.147748 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.148847 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.155260 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-5rpz8"] Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.309944 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7wq\" (UniqueName: \"kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq\") pod \"auto-csr-approver-29553726-5rpz8\" (UID: \"e1be8e5d-de4f-413c-94a5-eb6535cdd87f\") " pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.411629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l7wq\" (UniqueName: \"kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq\") pod \"auto-csr-approver-29553726-5rpz8\" (UID: \"e1be8e5d-de4f-413c-94a5-eb6535cdd87f\") " pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.444606 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l7wq\" (UniqueName: \"kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq\") pod \"auto-csr-approver-29553726-5rpz8\" (UID: \"e1be8e5d-de4f-413c-94a5-eb6535cdd87f\") " 
pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.478861 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:00 crc kubenswrapper[4808]: I0311 10:06:00.920887 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-5rpz8"] Mar 11 10:06:00 crc kubenswrapper[4808]: W0311 10:06:00.932406 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1be8e5d_de4f_413c_94a5_eb6535cdd87f.slice/crio-a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782 WatchSource:0}: Error finding container a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782: Status 404 returned error can't find the container with id a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782 Mar 11 10:06:01 crc kubenswrapper[4808]: I0311 10:06:01.718000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" event={"ID":"e1be8e5d-de4f-413c-94a5-eb6535cdd87f","Type":"ContainerStarted","Data":"a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782"} Mar 11 10:06:02 crc kubenswrapper[4808]: I0311 10:06:02.725277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" event={"ID":"e1be8e5d-de4f-413c-94a5-eb6535cdd87f","Type":"ContainerStarted","Data":"48249d8d185d7a10c8c2f354e39b16aef09249a962ff163cd9614612c427fcee"} Mar 11 10:06:02 crc kubenswrapper[4808]: I0311 10:06:02.741222 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" podStartSLOduration=1.236620346 podStartE2EDuration="2.741202359s" podCreationTimestamp="2026-03-11 10:06:00 +0000 UTC" firstStartedPulling="2026-03-11 10:06:00.935611936 +0000 UTC 
m=+5211.888935256" lastFinishedPulling="2026-03-11 10:06:02.440193949 +0000 UTC m=+5213.393517269" observedRunningTime="2026-03-11 10:06:02.73808045 +0000 UTC m=+5213.691403770" watchObservedRunningTime="2026-03-11 10:06:02.741202359 +0000 UTC m=+5213.694525679" Mar 11 10:06:03 crc kubenswrapper[4808]: I0311 10:06:03.736014 4808 generic.go:334] "Generic (PLEG): container finished" podID="e1be8e5d-de4f-413c-94a5-eb6535cdd87f" containerID="48249d8d185d7a10c8c2f354e39b16aef09249a962ff163cd9614612c427fcee" exitCode=0 Mar 11 10:06:03 crc kubenswrapper[4808]: I0311 10:06:03.736072 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" event={"ID":"e1be8e5d-de4f-413c-94a5-eb6535cdd87f","Type":"ContainerDied","Data":"48249d8d185d7a10c8c2f354e39b16aef09249a962ff163cd9614612c427fcee"} Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.128494 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.283127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l7wq\" (UniqueName: \"kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq\") pod \"e1be8e5d-de4f-413c-94a5-eb6535cdd87f\" (UID: \"e1be8e5d-de4f-413c-94a5-eb6535cdd87f\") " Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.288476 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq" (OuterVolumeSpecName: "kube-api-access-8l7wq") pod "e1be8e5d-de4f-413c-94a5-eb6535cdd87f" (UID: "e1be8e5d-de4f-413c-94a5-eb6535cdd87f"). InnerVolumeSpecName "kube-api-access-8l7wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.384815 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l7wq\" (UniqueName: \"kubernetes.io/projected/e1be8e5d-de4f-413c-94a5-eb6535cdd87f-kube-api-access-8l7wq\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.753544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" event={"ID":"e1be8e5d-de4f-413c-94a5-eb6535cdd87f","Type":"ContainerDied","Data":"a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782"} Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.753616 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a215d995ccfef2bd462a694cd1ef16066c294d8ec161dc8367b3143e04b97782" Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.753690 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-5rpz8" Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.810329 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-ppv88"] Mar 11 10:06:05 crc kubenswrapper[4808]: I0311 10:06:05.816326 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-ppv88"] Mar 11 10:06:07 crc kubenswrapper[4808]: I0311 10:06:07.789756 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:06:07 crc kubenswrapper[4808]: E0311 10:06:07.790252 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:06:07 crc kubenswrapper[4808]: I0311 10:06:07.797894 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd" path="/var/lib/kubelet/pods/b2488fd6-56fe-4c9b-a3e6-6cd4a0fe14fd/volumes" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.552312 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:10 crc kubenswrapper[4808]: E0311 10:06:10.552869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1be8e5d-de4f-413c-94a5-eb6535cdd87f" containerName="oc" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.552883 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1be8e5d-de4f-413c-94a5-eb6535cdd87f" containerName="oc" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.553030 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1be8e5d-de4f-413c-94a5-eb6535cdd87f" containerName="oc" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.553726 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.559887 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.560045 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.561138 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.561438 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6gw8m" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.579343 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.625926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.627126 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.630739 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.651206 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.667652 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld28k\" (UniqueName: \"kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.667830 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.769445 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvls4\" (UniqueName: \"kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.769515 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " 
pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.769544 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld28k\" (UniqueName: \"kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.769595 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.769775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.770646 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.846002 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld28k\" (UniqueName: \"kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k\") pod \"dnsmasq-dns-c44667757-sfdn5\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc 
kubenswrapper[4808]: I0311 10:06:10.870945 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.871205 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvls4\" (UniqueName: \"kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.871684 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.871949 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.872483 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.873010 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") 
" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.891279 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvls4\" (UniqueName: \"kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4\") pod \"dnsmasq-dns-55c76fd6b7-8jbn2\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:10 crc kubenswrapper[4808]: I0311 10:06:10.941175 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.087860 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.124403 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.127323 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.135306 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.277441 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.277779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.277811 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bk86\" (UniqueName: \"kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.379560 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.379615 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bk86\" (UniqueName: 
\"kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.379711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.380419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.380435 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.400094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bk86\" (UniqueName: \"kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86\") pod \"dnsmasq-dns-76b5b778c5-5gvnc\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.429429 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.454471 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.536165 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.552765 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.554481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.568329 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.589709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.589922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tsx\" (UniqueName: \"kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.589986 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 
10:06:11.599706 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.691187 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.691512 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tsx\" (UniqueName: \"kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.691539 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.692346 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.692867 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 
10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.721561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tsx\" (UniqueName: \"kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx\") pod \"dnsmasq-dns-ff89b6977-dkqvb\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.841388 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" event={"ID":"ffaf89d6-5f66-4615-968e-2b57213d113a","Type":"ContainerStarted","Data":"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617"} Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.841689 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" event={"ID":"ffaf89d6-5f66-4615-968e-2b57213d113a","Type":"ContainerStarted","Data":"da7a90250ff7ffae21b52ad58a54dce8fed8e2b7a6f503c330fa8d47651dd05c"} Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.846441 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a909308-68d8-4eab-8822-d9764cff7577" containerID="258fcda1d40f40ddcd602f31376809887afc35901a48dc8295b766a1986b949c" exitCode=0 Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.846479 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-sfdn5" event={"ID":"5a909308-68d8-4eab-8822-d9764cff7577","Type":"ContainerDied","Data":"258fcda1d40f40ddcd602f31376809887afc35901a48dc8295b766a1986b949c"} Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.846513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-sfdn5" event={"ID":"5a909308-68d8-4eab-8822-d9764cff7577","Type":"ContainerStarted","Data":"3126baa575e52318e84ca0795ab9ab890429e86a98b2dd0ed29725c0992e835b"} Mar 11 10:06:11 crc kubenswrapper[4808]: I0311 10:06:11.896126 4808 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:11 crc kubenswrapper[4808]: E0311 10:06:11.962891 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffaf89d6_5f66_4615_968e_2b57213d113a.slice/crio-conmon-731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffaf89d6_5f66_4615_968e_2b57213d113a.slice/crio-731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617.scope\": RecentStats: unable to find data in memory cache]" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.045190 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.239771 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.241220 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.243087 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.243222 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.243456 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.243492 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.249587 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.249946 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-88mh2" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.256787 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.256963 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.307674 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.375805 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407145 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config\") pod \"5a909308-68d8-4eab-8822-d9764cff7577\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407334 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld28k\" (UniqueName: \"kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k\") pod \"5a909308-68d8-4eab-8822-d9764cff7577\" (UID: \"5a909308-68d8-4eab-8822-d9764cff7577\") " Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407573 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407635 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407659 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407742 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407841 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.407970 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.408025 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.408088 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.408137 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pks5\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.408180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.410724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k" (OuterVolumeSpecName: "kube-api-access-ld28k") pod "5a909308-68d8-4eab-8822-d9764cff7577" (UID: "5a909308-68d8-4eab-8822-d9764cff7577"). InnerVolumeSpecName "kube-api-access-ld28k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.423781 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config" (OuterVolumeSpecName: "config") pod "5a909308-68d8-4eab-8822-d9764cff7577" (UID: "5a909308-68d8-4eab-8822-d9764cff7577"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.508734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config\") pod \"ffaf89d6-5f66-4615-968e-2b57213d113a\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.508820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc\") pod \"ffaf89d6-5f66-4615-968e-2b57213d113a\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.508880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvls4\" (UniqueName: \"kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4\") pod \"ffaf89d6-5f66-4615-968e-2b57213d113a\" (UID: \"ffaf89d6-5f66-4615-968e-2b57213d113a\") " Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509094 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509116 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pks5\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509208 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509269 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509291 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509344 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld28k\" (UniqueName: \"kubernetes.io/projected/5a909308-68d8-4eab-8822-d9764cff7577-kube-api-access-ld28k\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.509392 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a909308-68d8-4eab-8822-d9764cff7577-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.510218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.510659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.512298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.513625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.514549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4" (OuterVolumeSpecName: "kube-api-access-wvls4") pod "ffaf89d6-5f66-4615-968e-2b57213d113a" (UID: "ffaf89d6-5f66-4615-968e-2b57213d113a"). InnerVolumeSpecName "kube-api-access-wvls4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.516328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.517489 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.517743 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.517790 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1cb73165314b57b01ce9da0b6744027cf77411009629d1be7e0160691d4d6f11/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.518821 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.518942 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.522161 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.529403 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.534776 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config" (OuterVolumeSpecName: "config") pod "ffaf89d6-5f66-4615-968e-2b57213d113a" (UID: "ffaf89d6-5f66-4615-968e-2b57213d113a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.534913 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pks5\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.541620 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffaf89d6-5f66-4615-968e-2b57213d113a" (UID: "ffaf89d6-5f66-4615-968e-2b57213d113a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.552170 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.582192 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.610176 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvls4\" (UniqueName: \"kubernetes.io/projected/ffaf89d6-5f66-4615-968e-2b57213d113a-kube-api-access-wvls4\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.610210 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.610221 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffaf89d6-5f66-4615-968e-2b57213d113a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.706245 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:06:12 crc kubenswrapper[4808]: E0311 10:06:12.706580 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf89d6-5f66-4615-968e-2b57213d113a" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.706598 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf89d6-5f66-4615-968e-2b57213d113a" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: E0311 
10:06:12.706617 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a909308-68d8-4eab-8822-d9764cff7577" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.706626 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a909308-68d8-4eab-8822-d9764cff7577" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.706778 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaf89d6-5f66-4615-968e-2b57213d113a" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.706794 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a909308-68d8-4eab-8822-d9764cff7577" containerName="init" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.707650 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.713875 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.713916 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.713880 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.714152 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.714172 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.714335 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.714410 4808 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9gvhv" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.724579 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827510 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827530 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827644 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbd2\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.827662 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.875674 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerID="e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0" exitCode=0 Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.875763 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" event={"ID":"e7250cc2-0d67-4b87-a1bf-61901854e242","Type":"ContainerDied","Data":"e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.875792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" event={"ID":"e7250cc2-0d67-4b87-a1bf-61901854e242","Type":"ContainerStarted","Data":"547b0fdac18a9a92d4055f97be1f1e8003bfdd2f286fb7807086ad19164c6a57"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.890792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-sfdn5" event={"ID":"5a909308-68d8-4eab-8822-d9764cff7577","Type":"ContainerDied","Data":"3126baa575e52318e84ca0795ab9ab890429e86a98b2dd0ed29725c0992e835b"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 
10:06:12.890865 4808 scope.go:117] "RemoveContainer" containerID="258fcda1d40f40ddcd602f31376809887afc35901a48dc8295b766a1986b949c" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.891047 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-sfdn5" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.907636 4808 generic.go:334] "Generic (PLEG): container finished" podID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerID="36cf4051a2e640019b01ad9abd8dc3ffdffafaa54c2a7bed6dc6b073be48da07" exitCode=0 Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.907721 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" event={"ID":"d2d68f53-525a-46d8-81c6-a3668c7fb842","Type":"ContainerDied","Data":"36cf4051a2e640019b01ad9abd8dc3ffdffafaa54c2a7bed6dc6b073be48da07"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.907745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" event={"ID":"d2d68f53-525a-46d8-81c6-a3668c7fb842","Type":"ContainerStarted","Data":"54f1f7ead4d496835fa98c7d6254569b9adf9f8054008eeb9af02f1ca0d83774"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930088 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc 
kubenswrapper[4808]: I0311 10:06:12.930114 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930155 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930277 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930291 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbd2\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.930800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.937220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.937783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.937974 4808 generic.go:334] "Generic (PLEG): container finished" podID="ffaf89d6-5f66-4615-968e-2b57213d113a" containerID="731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617" exitCode=0 Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.938007 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" event={"ID":"ffaf89d6-5f66-4615-968e-2b57213d113a","Type":"ContainerDied","Data":"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.938033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" event={"ID":"ffaf89d6-5f66-4615-968e-2b57213d113a","Type":"ContainerDied","Data":"da7a90250ff7ffae21b52ad58a54dce8fed8e2b7a6f503c330fa8d47651dd05c"} Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.938090 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-8jbn2" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.938798 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.949708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.952405 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.952856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.955982 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.959930 
4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.966869 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.966905 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e00aa266669966b723e5adf11419961b42a5bc0d2a6dcd9225857088f1f90cdd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:12 crc kubenswrapper[4808]: I0311 10:06:12.991382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbd2\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.022092 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.050461 4808 scope.go:117] "RemoveContainer" 
containerID="731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.061835 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:06:13 crc kubenswrapper[4808]: W0311 10:06:13.084644 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae82137_73f8_4f99_a5b2_acecdb1e372a.slice/crio-11844c7e18b454bf43814677e82e276b083d36d6497f36e6daba52fac1098903 WatchSource:0}: Error finding container 11844c7e18b454bf43814677e82e276b083d36d6497f36e6daba52fac1098903: Status 404 returned error can't find the container with id 11844c7e18b454bf43814677e82e276b083d36d6497f36e6daba52fac1098903 Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.091493 4808 scope.go:117] "RemoveContainer" containerID="731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617" Mar 11 10:06:13 crc kubenswrapper[4808]: E0311 10:06:13.101420 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617\": container with ID starting with 731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617 not found: ID does not exist" containerID="731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.101715 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617"} err="failed to get container status \"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617\": rpc error: code = NotFound desc = could not find container \"731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617\": container with ID starting with 731afcbc899e3f4ac48383ebd458bceb24b135898657c86ee9da5f1e707b3617 not found: ID does 
not exist" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.101690 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.112869 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c44667757-sfdn5"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.153118 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.164107 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-8jbn2"] Mar 11 10:06:13 crc kubenswrapper[4808]: E0311 10:06:13.173216 4808 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 11 10:06:13 crc kubenswrapper[4808]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e7250cc2-0d67-4b87-a1bf-61901854e242/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 11 10:06:13 crc kubenswrapper[4808]: > podSandboxID="547b0fdac18a9a92d4055f97be1f1e8003bfdd2f286fb7807086ad19164c6a57" Mar 11 10:06:13 crc kubenswrapper[4808]: E0311 10:06:13.179058 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 10:06:13 crc kubenswrapper[4808]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bk86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-76b5b778c5-5gvnc_openstack(e7250cc2-0d67-4b87-a1bf-61901854e242): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e7250cc2-0d67-4b87-a1bf-61901854e242/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 11 10:06:13 crc kubenswrapper[4808]: > logger="UnhandledError" Mar 11 10:06:13 crc kubenswrapper[4808]: E0311 10:06:13.181688 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e7250cc2-0d67-4b87-a1bf-61901854e242/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.198793 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.201677 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.204312 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vqh22" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.207613 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.207671 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.206880 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.212640 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.235440 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.329251 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.335960 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kolla-config\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336027 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-default\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336148 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6zzp\" (UniqueName: \"kubernetes.io/projected/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kube-api-access-t6zzp\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336173 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336204 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.336342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.437728 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.437988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kolla-config\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " 
pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.438028 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-default\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.438044 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.438066 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6zzp\" (UniqueName: \"kubernetes.io/projected/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kube-api-access-t6zzp\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.438079 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.438099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc 
kubenswrapper[4808]: I0311 10:06:13.438124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.439023 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kolla-config\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.439145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-default\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.439388 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.439719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.443510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.444160 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.444184 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aa8eeb489a0c0fbafce527eb98d39ad0e760468e20b0f033faaa779435d03646/globalmount\"" pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.444230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.460510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6zzp\" (UniqueName: \"kubernetes.io/projected/a8ead8e6-bc4c-48cf-9375-cc9c27c5db50-kube-api-access-t6zzp\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.492185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcc1fa39-eed0-4904-96eb-5195bd22f1f5\") pod \"openstack-galera-0\" (UID: \"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50\") " pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.558782 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 10:06:13 crc kubenswrapper[4808]: W0311 10:06:13.780200 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c8053c2_7a71_4381_aa2f_9e0d4d1963e9.slice/crio-3da859e0059f3d69b4d4bb3c734a37479e83deb78dab31a8b8a486601e8ca88a WatchSource:0}: Error finding container 3da859e0059f3d69b4d4bb3c734a37479e83deb78dab31a8b8a486601e8ca88a: Status 404 returned error can't find the container with id 3da859e0059f3d69b4d4bb3c734a37479e83deb78dab31a8b8a486601e8ca88a Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.782393 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.801600 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a909308-68d8-4eab-8822-d9764cff7577" path="/var/lib/kubelet/pods/5a909308-68d8-4eab-8822-d9764cff7577/volumes" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.802574 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaf89d6-5f66-4615-968e-2b57213d113a" path="/var/lib/kubelet/pods/ffaf89d6-5f66-4615-968e-2b57213d113a/volumes" Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.956329 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerStarted","Data":"11844c7e18b454bf43814677e82e276b083d36d6497f36e6daba52fac1098903"} Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.957247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerStarted","Data":"3da859e0059f3d69b4d4bb3c734a37479e83deb78dab31a8b8a486601e8ca88a"} Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.962304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" event={"ID":"d2d68f53-525a-46d8-81c6-a3668c7fb842","Type":"ContainerStarted","Data":"ba7ec146f53b187a45c3d7c05afcec11292a61d2823be9b7d0ef800b222b7f4c"} Mar 11 10:06:13 crc kubenswrapper[4808]: I0311 10:06:13.999907 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" podStartSLOduration=2.99988833 podStartE2EDuration="2.99988833s" podCreationTimestamp="2026-03-11 10:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:13.99600175 +0000 UTC m=+5224.949325070" watchObservedRunningTime="2026-03-11 10:06:13.99988833 +0000 UTC m=+5224.953211650" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.043150 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.697639 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.699072 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.705491 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.706075 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.706274 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6rczw" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.712485 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.712811 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.857755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.857834 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76zq\" (UniqueName: \"kubernetes.io/projected/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kube-api-access-q76zq\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.857894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.857948 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.858067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.858201 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.858239 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.858418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-865ab514-766b-4822-add9-28d035e241c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865ab514-766b-4822-add9-28d035e241c4\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q76zq\" (UniqueName: \"kubernetes.io/projected/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kube-api-access-q76zq\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959799 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959908 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.959945 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.960017 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-865ab514-766b-4822-add9-28d035e241c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865ab514-766b-4822-add9-28d035e241c4\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.960610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.962086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.962699 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.962793 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.965208 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.967660 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.967709 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-865ab514-766b-4822-add9-28d035e241c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865ab514-766b-4822-add9-28d035e241c4\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/23ad402358e7631db914be1c7d59478908e0d1e199c1783f2a70ea0a66d5bd2a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.972928 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerStarted","Data":"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c"} Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.973761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.985136 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" event={"ID":"e7250cc2-0d67-4b87-a1bf-61901854e242","Type":"ContainerStarted","Data":"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda"} Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.986153 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.989003 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50","Type":"ContainerStarted","Data":"d4d3296cdd411fd5d40bb1210e9440f2b8c589ac54b5143e986c1597b4727836"} Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.989048 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50","Type":"ContainerStarted","Data":"c6449b47b78b61c22374c17aab37000f1946e69b413ea421b9e059c9ef64248a"} Mar 11 10:06:14 crc kubenswrapper[4808]: I0311 10:06:14.989065 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.005991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q76zq\" (UniqueName: \"kubernetes.io/projected/67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d-kube-api-access-q76zq\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.022138 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-865ab514-766b-4822-add9-28d035e241c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-865ab514-766b-4822-add9-28d035e241c4\") pod \"openstack-cell1-galera-0\" (UID: \"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.045546 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" podStartSLOduration=4.045523718 podStartE2EDuration="4.045523718s" podCreationTimestamp="2026-03-11 10:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:15.038795037 +0000 UTC m=+5225.992118377" watchObservedRunningTime="2026-03-11 10:06:15.045523718 +0000 UTC 
m=+5225.998847038" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.103532 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.104471 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.106628 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.107215 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jslm4" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.108245 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.124137 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.163668 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qvk\" (UniqueName: \"kubernetes.io/projected/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kube-api-access-n7qvk\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.163812 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.163864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kolla-config\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.163916 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.164094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-config-data\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.266190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qvk\" (UniqueName: \"kubernetes.io/projected/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kube-api-access-n7qvk\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.266570 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.266708 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kolla-config\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " 
pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.266919 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.267612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-config-data\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.267565 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kolla-config\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.268145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-config-data\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.271258 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.273996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.284914 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qvk\" (UniqueName: \"kubernetes.io/projected/9687850f-8da8-4ab1-9316-7d0d9d4f75f6-kube-api-access-n7qvk\") pod \"memcached-0\" (UID: \"9687850f-8da8-4ab1-9316-7d0d9d4f75f6\") " pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.316185 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.420070 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.862497 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:06:15 crc kubenswrapper[4808]: W0311 10:06:15.863979 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67bbd1bb_f8a9_4de4_b4a0_fbe575d7a85d.slice/crio-a23498e956975ac038fd1ff219ba4b9ac37a36c48180d350e333ad6690416197 WatchSource:0}: Error finding container a23498e956975ac038fd1ff219ba4b9ac37a36c48180d350e333ad6690416197: Status 404 returned error can't find the container with id a23498e956975ac038fd1ff219ba4b9ac37a36c48180d350e333ad6690416197 Mar 11 10:06:15 crc kubenswrapper[4808]: I0311 10:06:15.987335 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 10:06:16 crc kubenswrapper[4808]: W0311 10:06:16.018133 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9687850f_8da8_4ab1_9316_7d0d9d4f75f6.slice/crio-0437e6d866fdd2900a7d89ad7dd6fcba66cba16a874e5186ee20a29a8f402294 WatchSource:0}: Error finding container 0437e6d866fdd2900a7d89ad7dd6fcba66cba16a874e5186ee20a29a8f402294: Status 404 returned error can't find the container with id 0437e6d866fdd2900a7d89ad7dd6fcba66cba16a874e5186ee20a29a8f402294 Mar 11 10:06:16 crc kubenswrapper[4808]: I0311 10:06:16.033779 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d","Type":"ContainerStarted","Data":"a23498e956975ac038fd1ff219ba4b9ac37a36c48180d350e333ad6690416197"} Mar 11 10:06:16 crc kubenswrapper[4808]: I0311 10:06:16.037856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerStarted","Data":"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5"} Mar 11 10:06:17 crc kubenswrapper[4808]: I0311 10:06:17.042538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9687850f-8da8-4ab1-9316-7d0d9d4f75f6","Type":"ContainerStarted","Data":"279dc1fa43cd277003d560a343e15800b1ae93b768d1033ddb015c4ff9d922a4"} Mar 11 10:06:17 crc kubenswrapper[4808]: I0311 10:06:17.042881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9687850f-8da8-4ab1-9316-7d0d9d4f75f6","Type":"ContainerStarted","Data":"0437e6d866fdd2900a7d89ad7dd6fcba66cba16a874e5186ee20a29a8f402294"} Mar 11 10:06:17 crc kubenswrapper[4808]: I0311 10:06:17.043020 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 11 10:06:17 crc kubenswrapper[4808]: I0311 10:06:17.044101 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d","Type":"ContainerStarted","Data":"da6d546387ebb1bde7c4bd9fc4486e2dcd38721d738bafd8a89e4492b335a82d"} Mar 11 10:06:17 crc kubenswrapper[4808]: I0311 10:06:17.066948 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.066922463 podStartE2EDuration="2.066922463s" podCreationTimestamp="2026-03-11 10:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:17.062652283 +0000 UTC m=+5228.015975613" watchObservedRunningTime="2026-03-11 10:06:17.066922463 +0000 UTC m=+5228.020245783" Mar 11 10:06:18 crc kubenswrapper[4808]: I0311 10:06:18.055079 4808 generic.go:334] "Generic (PLEG): container finished" podID="a8ead8e6-bc4c-48cf-9375-cc9c27c5db50" containerID="d4d3296cdd411fd5d40bb1210e9440f2b8c589ac54b5143e986c1597b4727836" exitCode=0 Mar 11 10:06:18 crc kubenswrapper[4808]: I0311 10:06:18.055233 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50","Type":"ContainerDied","Data":"d4d3296cdd411fd5d40bb1210e9440f2b8c589ac54b5143e986c1597b4727836"} Mar 11 10:06:18 crc kubenswrapper[4808]: I0311 10:06:18.789719 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:06:18 crc kubenswrapper[4808]: E0311 10:06:18.790262 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:06:19 crc kubenswrapper[4808]: I0311 10:06:19.062962 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a8ead8e6-bc4c-48cf-9375-cc9c27c5db50","Type":"ContainerStarted","Data":"5ab445deec93aabdc0e0bbd95bc2adf62a501694dd076a4f39d0ee61af7ac910"} Mar 11 10:06:19 crc kubenswrapper[4808]: I0311 10:06:19.087206 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.087183817 podStartE2EDuration="7.087183817s" podCreationTimestamp="2026-03-11 10:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:19.080805576 +0000 UTC m=+5230.034128936" watchObservedRunningTime="2026-03-11 10:06:19.087183817 +0000 UTC m=+5230.040507137" Mar 11 10:06:20 crc kubenswrapper[4808]: I0311 10:06:20.075123 4808 generic.go:334] "Generic (PLEG): container finished" podID="67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d" containerID="da6d546387ebb1bde7c4bd9fc4486e2dcd38721d738bafd8a89e4492b335a82d" exitCode=0 Mar 11 10:06:20 crc kubenswrapper[4808]: I0311 10:06:20.075197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d","Type":"ContainerDied","Data":"da6d546387ebb1bde7c4bd9fc4486e2dcd38721d738bafd8a89e4492b335a82d"} Mar 11 10:06:21 crc kubenswrapper[4808]: I0311 10:06:21.086631 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d","Type":"ContainerStarted","Data":"eaf27a541be2e9776fb30c02f69f316a6039170afa61dbf935db86754cd07244"} Mar 11 10:06:21 crc kubenswrapper[4808]: I0311 10:06:21.113239 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.113216074 podStartE2EDuration="8.113216074s" podCreationTimestamp="2026-03-11 10:06:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:21.109481209 +0000 UTC m=+5232.062804549" watchObservedRunningTime="2026-03-11 10:06:21.113216074 +0000 UTC m=+5232.066539404" Mar 11 10:06:21 crc kubenswrapper[4808]: I0311 10:06:21.456641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:21 crc kubenswrapper[4808]: I0311 10:06:21.898594 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:06:21 crc kubenswrapper[4808]: I0311 10:06:21.999440 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.092035 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="dnsmasq-dns" containerID="cri-o://6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda" gracePeriod=10 Mar 11 10:06:22 crc kubenswrapper[4808]: E0311 10:06:22.163766 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7250cc2_0d67_4b87_a1bf_61901854e242.slice/crio-conmon-6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda.scope\": RecentStats: unable to find data in memory cache]" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.582244 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.713975 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc\") pod \"e7250cc2-0d67-4b87-a1bf-61901854e242\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.714124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config\") pod \"e7250cc2-0d67-4b87-a1bf-61901854e242\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.714174 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bk86\" (UniqueName: \"kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86\") pod \"e7250cc2-0d67-4b87-a1bf-61901854e242\" (UID: \"e7250cc2-0d67-4b87-a1bf-61901854e242\") " Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.718571 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86" (OuterVolumeSpecName: "kube-api-access-5bk86") pod "e7250cc2-0d67-4b87-a1bf-61901854e242" (UID: "e7250cc2-0d67-4b87-a1bf-61901854e242"). InnerVolumeSpecName "kube-api-access-5bk86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.751322 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7250cc2-0d67-4b87-a1bf-61901854e242" (UID: "e7250cc2-0d67-4b87-a1bf-61901854e242"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.752095 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config" (OuterVolumeSpecName: "config") pod "e7250cc2-0d67-4b87-a1bf-61901854e242" (UID: "e7250cc2-0d67-4b87-a1bf-61901854e242"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.815636 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.815673 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bk86\" (UniqueName: \"kubernetes.io/projected/e7250cc2-0d67-4b87-a1bf-61901854e242-kube-api-access-5bk86\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:22 crc kubenswrapper[4808]: I0311 10:06:22.815686 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7250cc2-0d67-4b87-a1bf-61901854e242-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.102709 4808 generic.go:334] "Generic (PLEG): container finished" podID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerID="6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda" exitCode=0 Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.102753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" event={"ID":"e7250cc2-0d67-4b87-a1bf-61901854e242","Type":"ContainerDied","Data":"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda"} Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.102778 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" 
event={"ID":"e7250cc2-0d67-4b87-a1bf-61901854e242","Type":"ContainerDied","Data":"547b0fdac18a9a92d4055f97be1f1e8003bfdd2f286fb7807086ad19164c6a57"} Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.102803 4808 scope.go:117] "RemoveContainer" containerID="6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.102914 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5b778c5-5gvnc" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.119764 4808 scope.go:117] "RemoveContainer" containerID="e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.146856 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.152967 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76b5b778c5-5gvnc"] Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.161860 4808 scope.go:117] "RemoveContainer" containerID="6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda" Mar 11 10:06:23 crc kubenswrapper[4808]: E0311 10:06:23.162274 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda\": container with ID starting with 6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda not found: ID does not exist" containerID="6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.162319 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda"} err="failed to get container status 
\"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda\": rpc error: code = NotFound desc = could not find container \"6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda\": container with ID starting with 6477bccdf303187da7449c0f61d22c7aeedca02d5dd55a45b5da4b9cbcab7fda not found: ID does not exist" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.162350 4808 scope.go:117] "RemoveContainer" containerID="e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0" Mar 11 10:06:23 crc kubenswrapper[4808]: E0311 10:06:23.162666 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0\": container with ID starting with e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0 not found: ID does not exist" containerID="e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.162707 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0"} err="failed to get container status \"e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0\": rpc error: code = NotFound desc = could not find container \"e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0\": container with ID starting with e5ae237309c088a61c0c3e0b20562cb16bbc4102a131423c32eb6225d15911e0 not found: ID does not exist" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.559614 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.559672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 11 10:06:23 crc kubenswrapper[4808]: I0311 10:06:23.800229 4808 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" path="/var/lib/kubelet/pods/e7250cc2-0d67-4b87-a1bf-61901854e242/volumes" Mar 11 10:06:25 crc kubenswrapper[4808]: I0311 10:06:25.316955 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:25 crc kubenswrapper[4808]: I0311 10:06:25.317459 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:25 crc kubenswrapper[4808]: I0311 10:06:25.422690 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 11 10:06:25 crc kubenswrapper[4808]: I0311 10:06:25.433463 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:25 crc kubenswrapper[4808]: I0311 10:06:25.982900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 11 10:06:26 crc kubenswrapper[4808]: I0311 10:06:26.082579 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 11 10:06:26 crc kubenswrapper[4808]: I0311 10:06:26.206120 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.118273 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vhnpm"] Mar 11 10:06:32 crc kubenswrapper[4808]: E0311 10:06:32.119130 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="init" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.119142 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="init" Mar 11 10:06:32 crc kubenswrapper[4808]: E0311 10:06:32.119155 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="dnsmasq-dns" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.119161 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="dnsmasq-dns" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.119323 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7250cc2-0d67-4b87-a1bf-61901854e242" containerName="dnsmasq-dns" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.119857 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.126043 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.126673 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vhnpm"] Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.298838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.298908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.400536 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.400870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.401828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.418954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm\") pod \"root-account-create-update-vhnpm\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.437273 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.789079 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:06:32 crc kubenswrapper[4808]: E0311 10:06:32.789684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:06:32 crc kubenswrapper[4808]: I0311 10:06:32.884148 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vhnpm"] Mar 11 10:06:33 crc kubenswrapper[4808]: I0311 10:06:33.183309 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vhnpm" event={"ID":"0c6bab65-a38e-4901-9176-b7d388bb6acd","Type":"ContainerStarted","Data":"6cad207c79c3a1871cddd5408012ce1fba7291ba5dac44dd2bedeab6da4e97ef"} Mar 11 10:06:33 crc kubenswrapper[4808]: I0311 10:06:33.183700 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vhnpm" event={"ID":"0c6bab65-a38e-4901-9176-b7d388bb6acd","Type":"ContainerStarted","Data":"60c659934aa7720f31b363e2cf9f27f6eeea49ab992ea05946fff83e667f9544"} Mar 11 10:06:33 crc kubenswrapper[4808]: I0311 10:06:33.206778 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vhnpm" podStartSLOduration=1.206754953 podStartE2EDuration="1.206754953s" podCreationTimestamp="2026-03-11 10:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 
10:06:33.199665592 +0000 UTC m=+5244.152988932" watchObservedRunningTime="2026-03-11 10:06:33.206754953 +0000 UTC m=+5244.160078293" Mar 11 10:06:34 crc kubenswrapper[4808]: I0311 10:06:34.191605 4808 generic.go:334] "Generic (PLEG): container finished" podID="0c6bab65-a38e-4901-9176-b7d388bb6acd" containerID="6cad207c79c3a1871cddd5408012ce1fba7291ba5dac44dd2bedeab6da4e97ef" exitCode=0 Mar 11 10:06:34 crc kubenswrapper[4808]: I0311 10:06:34.191656 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vhnpm" event={"ID":"0c6bab65-a38e-4901-9176-b7d388bb6acd","Type":"ContainerDied","Data":"6cad207c79c3a1871cddd5408012ce1fba7291ba5dac44dd2bedeab6da4e97ef"} Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.535177 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.653847 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts\") pod \"0c6bab65-a38e-4901-9176-b7d388bb6acd\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.653945 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm\") pod \"0c6bab65-a38e-4901-9176-b7d388bb6acd\" (UID: \"0c6bab65-a38e-4901-9176-b7d388bb6acd\") " Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.654795 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c6bab65-a38e-4901-9176-b7d388bb6acd" (UID: "0c6bab65-a38e-4901-9176-b7d388bb6acd"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.659309 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm" (OuterVolumeSpecName: "kube-api-access-csgpm") pod "0c6bab65-a38e-4901-9176-b7d388bb6acd" (UID: "0c6bab65-a38e-4901-9176-b7d388bb6acd"). InnerVolumeSpecName "kube-api-access-csgpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.755236 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6bab65-a38e-4901-9176-b7d388bb6acd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:35 crc kubenswrapper[4808]: I0311 10:06:35.755308 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/0c6bab65-a38e-4901-9176-b7d388bb6acd-kube-api-access-csgpm\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:36 crc kubenswrapper[4808]: I0311 10:06:36.209493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vhnpm" event={"ID":"0c6bab65-a38e-4901-9176-b7d388bb6acd","Type":"ContainerDied","Data":"60c659934aa7720f31b363e2cf9f27f6eeea49ab992ea05946fff83e667f9544"} Mar 11 10:06:36 crc kubenswrapper[4808]: I0311 10:06:36.209539 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vhnpm" Mar 11 10:06:36 crc kubenswrapper[4808]: I0311 10:06:36.209551 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c659934aa7720f31b363e2cf9f27f6eeea49ab992ea05946fff83e667f9544" Mar 11 10:06:38 crc kubenswrapper[4808]: I0311 10:06:38.702963 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vhnpm"] Mar 11 10:06:38 crc kubenswrapper[4808]: I0311 10:06:38.711223 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vhnpm"] Mar 11 10:06:39 crc kubenswrapper[4808]: I0311 10:06:39.805509 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6bab65-a38e-4901-9176-b7d388bb6acd" path="/var/lib/kubelet/pods/0c6bab65-a38e-4901-9176-b7d388bb6acd/volumes" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.711966 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d6pnc"] Mar 11 10:06:43 crc kubenswrapper[4808]: E0311 10:06:43.713540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6bab65-a38e-4901-9176-b7d388bb6acd" containerName="mariadb-account-create-update" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.713628 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6bab65-a38e-4901-9176-b7d388bb6acd" containerName="mariadb-account-create-update" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.713830 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6bab65-a38e-4901-9176-b7d388bb6acd" containerName="mariadb-account-create-update" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.714394 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.716965 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.725839 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d6pnc"] Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.880569 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmr4m\" (UniqueName: \"kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m\") pod \"root-account-create-update-d6pnc\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.880633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts\") pod \"root-account-create-update-d6pnc\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.982026 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmr4m\" (UniqueName: \"kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m\") pod \"root-account-create-update-d6pnc\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.982092 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts\") pod \"root-account-create-update-d6pnc\" (UID: 
\"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.983095 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts\") pod \"root-account-create-update-d6pnc\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:43 crc kubenswrapper[4808]: I0311 10:06:43.999746 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmr4m\" (UniqueName: \"kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m\") pod \"root-account-create-update-d6pnc\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:44 crc kubenswrapper[4808]: I0311 10:06:44.030797 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:44 crc kubenswrapper[4808]: I0311 10:06:44.490985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d6pnc"] Mar 11 10:06:45 crc kubenswrapper[4808]: I0311 10:06:45.275741 4808 generic.go:334] "Generic (PLEG): container finished" podID="37025f25-23b4-48b8-ba7c-c28df7c641f2" containerID="00f3757d9aa1b70765b5497cd1f8c0ea6fea815311691b3a94fb50e2d6c4ba6f" exitCode=0 Mar 11 10:06:45 crc kubenswrapper[4808]: I0311 10:06:45.275803 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d6pnc" event={"ID":"37025f25-23b4-48b8-ba7c-c28df7c641f2","Type":"ContainerDied","Data":"00f3757d9aa1b70765b5497cd1f8c0ea6fea815311691b3a94fb50e2d6c4ba6f"} Mar 11 10:06:45 crc kubenswrapper[4808]: I0311 10:06:45.276051 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d6pnc" event={"ID":"37025f25-23b4-48b8-ba7c-c28df7c641f2","Type":"ContainerStarted","Data":"2701933325cb4e3e36df629fc54ac96e83e99783d374d90b518a602cef2877ac"} Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.284085 4808 generic.go:334] "Generic (PLEG): container finished" podID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerID="efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c" exitCode=0 Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.284388 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerDied","Data":"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c"} Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.580769 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.723042 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts\") pod \"37025f25-23b4-48b8-ba7c-c28df7c641f2\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.723503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmr4m\" (UniqueName: \"kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m\") pod \"37025f25-23b4-48b8-ba7c-c28df7c641f2\" (UID: \"37025f25-23b4-48b8-ba7c-c28df7c641f2\") " Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.724003 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37025f25-23b4-48b8-ba7c-c28df7c641f2" (UID: "37025f25-23b4-48b8-ba7c-c28df7c641f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.728778 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m" (OuterVolumeSpecName: "kube-api-access-zmr4m") pod "37025f25-23b4-48b8-ba7c-c28df7c641f2" (UID: "37025f25-23b4-48b8-ba7c-c28df7c641f2"). InnerVolumeSpecName "kube-api-access-zmr4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.825111 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37025f25-23b4-48b8-ba7c-c28df7c641f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:46 crc kubenswrapper[4808]: I0311 10:06:46.825151 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmr4m\" (UniqueName: \"kubernetes.io/projected/37025f25-23b4-48b8-ba7c-c28df7c641f2-kube-api-access-zmr4m\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.293665 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d6pnc" Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.293677 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d6pnc" event={"ID":"37025f25-23b4-48b8-ba7c-c28df7c641f2","Type":"ContainerDied","Data":"2701933325cb4e3e36df629fc54ac96e83e99783d374d90b518a602cef2877ac"} Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.293731 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2701933325cb4e3e36df629fc54ac96e83e99783d374d90b518a602cef2877ac" Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.295454 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerStarted","Data":"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b"} Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.295692 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.297583 4808 generic.go:334] "Generic (PLEG): container finished" podID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" 
containerID="b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5" exitCode=0 Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.297625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerDied","Data":"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5"} Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.325990 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.325964197 podStartE2EDuration="36.325964197s" podCreationTimestamp="2026-03-11 10:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:47.322023766 +0000 UTC m=+5258.275347086" watchObservedRunningTime="2026-03-11 10:06:47.325964197 +0000 UTC m=+5258.279287517" Mar 11 10:06:47 crc kubenswrapper[4808]: I0311 10:06:47.793495 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:06:47 crc kubenswrapper[4808]: E0311 10:06:47.793716 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:06:48 crc kubenswrapper[4808]: I0311 10:06:48.307954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerStarted","Data":"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f"} Mar 11 10:06:48 crc kubenswrapper[4808]: I0311 
10:06:48.308585 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:06:48 crc kubenswrapper[4808]: I0311 10:06:48.331568 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.33154959 podStartE2EDuration="37.33154959s" podCreationTimestamp="2026-03-11 10:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:06:48.329250224 +0000 UTC m=+5259.282573564" watchObservedRunningTime="2026-03-11 10:06:48.33154959 +0000 UTC m=+5259.284872910" Mar 11 10:06:56 crc kubenswrapper[4808]: I0311 10:06:56.341812 4808 scope.go:117] "RemoveContainer" containerID="36aaee5e6c833a2206b267909ae8eb7c63c75287f60de1ed4e09234e56826124" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.176235 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:06:59 crc kubenswrapper[4808]: E0311 10:06:59.176789 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37025f25-23b4-48b8-ba7c-c28df7c641f2" containerName="mariadb-account-create-update" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.176800 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37025f25-23b4-48b8-ba7c-c28df7c641f2" containerName="mariadb-account-create-update" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.176936 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37025f25-23b4-48b8-ba7c-c28df7c641f2" containerName="mariadb-account-create-update" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.177915 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.193938 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.304870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.305070 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.305207 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrw6\" (UniqueName: \"kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.369443 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.371124 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.381419 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.406534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.406626 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrw6\" (UniqueName: \"kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.406673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.407291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.407586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.429796 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrw6\" (UniqueName: \"kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6\") pod \"certified-operators-jcgrt\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.496529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.508396 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.508437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.508493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9mns\" (UniqueName: \"kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") 
" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.611154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.611468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.611515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mns\" (UniqueName: \"kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.611923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.612205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " 
pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.645470 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mns\" (UniqueName: \"kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns\") pod \"community-operators-g57w5\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.690082 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:06:59 crc kubenswrapper[4808]: I0311 10:06:59.987865 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.195398 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:07:00 crc kubenswrapper[4808]: W0311 10:07:00.255594 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d9f2e22_7067_48cb_9486_d4affde68298.slice/crio-b1391db41b4c6026ebc713f5327efa4a0bb0b6f41540aa9c00337e4bf788591c WatchSource:0}: Error finding container b1391db41b4c6026ebc713f5327efa4a0bb0b6f41540aa9c00337e4bf788591c: Status 404 returned error can't find the container with id b1391db41b4c6026ebc713f5327efa4a0bb0b6f41540aa9c00337e4bf788591c Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.395186 4808 generic.go:334] "Generic (PLEG): container finished" podID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerID="44f84f3638f5d20fc6313ee5d68921338e3aa36c969eb367b7ae6ac117d13630" exitCode=0 Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.395281 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" 
event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerDied","Data":"44f84f3638f5d20fc6313ee5d68921338e3aa36c969eb367b7ae6ac117d13630"} Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.395404 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerStarted","Data":"1340fc1b46f987ad9b86855d658d1be086b7519a439710c025d12689b3b5d7f9"} Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.397303 4808 generic.go:334] "Generic (PLEG): container finished" podID="4d9f2e22-7067-48cb-9486-d4affde68298" containerID="76ca212a738a34f94f5d49b450a9dfa06f6fe1da40123186a4824a88d3749fc2" exitCode=0 Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.397342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerDied","Data":"76ca212a738a34f94f5d49b450a9dfa06f6fe1da40123186a4824a88d3749fc2"} Mar 11 10:07:00 crc kubenswrapper[4808]: I0311 10:07:00.397422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerStarted","Data":"b1391db41b4c6026ebc713f5327efa4a0bb0b6f41540aa9c00337e4bf788591c"} Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.406093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerStarted","Data":"b3842a5f45216adcc48b393ad3ac53fb89dea7ef5bdd1526250d389121d348a5"} Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.597789 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.616232 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.621233 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.755378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.755475 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.755585 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fp6\" (UniqueName: \"kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.789280 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:07:01 crc kubenswrapper[4808]: E0311 10:07:01.789637 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.856718 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.856847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.856886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9fp6\" (UniqueName: \"kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.857845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.857992 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.886455 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9fp6\" (UniqueName: \"kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6\") pod \"redhat-marketplace-wl7kp\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:01 crc kubenswrapper[4808]: I0311 10:07:01.960387 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:02 crc kubenswrapper[4808]: I0311 10:07:02.414201 4808 generic.go:334] "Generic (PLEG): container finished" podID="4d9f2e22-7067-48cb-9486-d4affde68298" containerID="ddef391bfa4b68732345593f8ca9670174e604f78601d4be09f2d1e1ce7ac8d6" exitCode=0 Mar 11 10:07:02 crc kubenswrapper[4808]: I0311 10:07:02.414284 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerDied","Data":"ddef391bfa4b68732345593f8ca9670174e604f78601d4be09f2d1e1ce7ac8d6"} Mar 11 10:07:02 crc kubenswrapper[4808]: I0311 10:07:02.420677 4808 generic.go:334] "Generic (PLEG): container finished" podID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerID="b3842a5f45216adcc48b393ad3ac53fb89dea7ef5bdd1526250d389121d348a5" exitCode=0 Mar 11 10:07:02 crc kubenswrapper[4808]: I0311 10:07:02.420759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerDied","Data":"b3842a5f45216adcc48b393ad3ac53fb89dea7ef5bdd1526250d389121d348a5"} Mar 11 10:07:02 crc kubenswrapper[4808]: 
I0311 10:07:02.427239 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:02 crc kubenswrapper[4808]: W0311 10:07:02.435707 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf38487c_78bd_4002_8ebc_80badc161631.slice/crio-4ec6e1dcde65a2b3fe51a8655ccc25872aa0e991da4d890635947bb3ffed0214 WatchSource:0}: Error finding container 4ec6e1dcde65a2b3fe51a8655ccc25872aa0e991da4d890635947bb3ffed0214: Status 404 returned error can't find the container with id 4ec6e1dcde65a2b3fe51a8655ccc25872aa0e991da4d890635947bb3ffed0214 Mar 11 10:07:02 crc kubenswrapper[4808]: I0311 10:07:02.586563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.333630 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.429652 4808 generic.go:334] "Generic (PLEG): container finished" podID="cf38487c-78bd-4002-8ebc-80badc161631" containerID="66d36981cb1cb49e92dede616046b3e9394b01cb29ac63a3a189504924e4755f" exitCode=0 Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.429748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerDied","Data":"66d36981cb1cb49e92dede616046b3e9394b01cb29ac63a3a189504924e4755f"} Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.430016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerStarted","Data":"4ec6e1dcde65a2b3fe51a8655ccc25872aa0e991da4d890635947bb3ffed0214"} Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.434084 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerStarted","Data":"e3479e183db53999834b3e4320a45e4e8b5d406bd60d8e11f0dd28988d1672b0"} Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.438018 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerStarted","Data":"a564ce5089ffaffdb5a698ebca448f4d6ccc9b23e1902d1903ea426301f783e3"} Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.477429 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jcgrt" podStartSLOduration=1.8425154209999999 podStartE2EDuration="4.477405234s" podCreationTimestamp="2026-03-11 10:06:59 +0000 UTC" firstStartedPulling="2026-03-11 10:07:00.396462486 +0000 UTC m=+5271.349785836" lastFinishedPulling="2026-03-11 10:07:03.031352339 +0000 UTC m=+5273.984675649" observedRunningTime="2026-03-11 10:07:03.46774385 +0000 UTC m=+5274.421067170" watchObservedRunningTime="2026-03-11 10:07:03.477405234 +0000 UTC m=+5274.430728554" Mar 11 10:07:03 crc kubenswrapper[4808]: I0311 10:07:03.492445 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g57w5" podStartSLOduration=2.03418976 podStartE2EDuration="4.492426269s" podCreationTimestamp="2026-03-11 10:06:59 +0000 UTC" firstStartedPulling="2026-03-11 10:07:00.399256336 +0000 UTC m=+5271.352579696" lastFinishedPulling="2026-03-11 10:07:02.857492875 +0000 UTC m=+5273.810816205" observedRunningTime="2026-03-11 10:07:03.490173535 +0000 UTC m=+5274.443496865" watchObservedRunningTime="2026-03-11 10:07:03.492426269 +0000 UTC m=+5274.445749589" Mar 11 10:07:05 crc kubenswrapper[4808]: I0311 10:07:05.455451 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="cf38487c-78bd-4002-8ebc-80badc161631" containerID="81edc2b353ca98da1788118bc9c412fb98ff964f8c9b969a42e29f4a7c882013" exitCode=0 Mar 11 10:07:05 crc kubenswrapper[4808]: I0311 10:07:05.455543 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerDied","Data":"81edc2b353ca98da1788118bc9c412fb98ff964f8c9b969a42e29f4a7c882013"} Mar 11 10:07:06 crc kubenswrapper[4808]: I0311 10:07:06.463780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerStarted","Data":"5e78bc7afbb4895c00efb1e082506b170327a772ac71c576b2b54ff9ebae4506"} Mar 11 10:07:06 crc kubenswrapper[4808]: I0311 10:07:06.483840 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wl7kp" podStartSLOduration=3.077198483 podStartE2EDuration="5.48382265s" podCreationTimestamp="2026-03-11 10:07:01 +0000 UTC" firstStartedPulling="2026-03-11 10:07:03.431641838 +0000 UTC m=+5274.384965158" lastFinishedPulling="2026-03-11 10:07:05.838265995 +0000 UTC m=+5276.791589325" observedRunningTime="2026-03-11 10:07:06.477026238 +0000 UTC m=+5277.430349558" watchObservedRunningTime="2026-03-11 10:07:06.48382265 +0000 UTC m=+5277.437145960" Mar 11 10:07:09 crc kubenswrapper[4808]: I0311 10:07:09.496661 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:09 crc kubenswrapper[4808]: I0311 10:07:09.497113 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:09 crc kubenswrapper[4808]: I0311 10:07:09.545477 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:09 crc 
kubenswrapper[4808]: I0311 10:07:09.691228 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:09 crc kubenswrapper[4808]: I0311 10:07:09.691294 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:09 crc kubenswrapper[4808]: I0311 10:07:09.771640 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:10 crc kubenswrapper[4808]: I0311 10:07:10.540880 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:10 crc kubenswrapper[4808]: I0311 10:07:10.547293 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.020674 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.022002 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.039557 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.103348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.103423 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.103761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2d7p\" (UniqueName: \"kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.205824 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2d7p\" (UniqueName: \"kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.205901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.205943 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.207182 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.207271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.236786 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2d7p\" (UniqueName: \"kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p\") pod \"dnsmasq-dns-66d5bf7c87-hzmp7\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.338670 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.564146 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.801800 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.960866 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:11 crc kubenswrapper[4808]: I0311 10:07:11.960935 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.013452 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.310626 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.418919 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.517504 4808 generic.go:334] "Generic (PLEG): container finished" podID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerID="5e32985ef7339e383f65d096a935cbb7822fb5916dd1de35bbd6becec5a0c0b6" exitCode=0 Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.517561 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" event={"ID":"8e28621f-b235-4e1a-a082-2c7542a2c3ac","Type":"ContainerDied","Data":"5e32985ef7339e383f65d096a935cbb7822fb5916dd1de35bbd6becec5a0c0b6"} Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.517621 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" event={"ID":"8e28621f-b235-4e1a-a082-2c7542a2c3ac","Type":"ContainerStarted","Data":"1ba4e2910f776022b10c1f5ca821d75cea0581e4dcfce5f0421ea6ef39d40667"} Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.517860 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g57w5" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="registry-server" containerID="cri-o://a564ce5089ffaffdb5a698ebca448f4d6ccc9b23e1902d1903ea426301f783e3" gracePeriod=2 Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.583582 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.962703 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:07:12 crc kubenswrapper[4808]: I0311 10:07:12.963188 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jcgrt" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="registry-server" containerID="cri-o://e3479e183db53999834b3e4320a45e4e8b5d406bd60d8e11f0dd28988d1672b0" gracePeriod=2 Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.526332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" event={"ID":"8e28621f-b235-4e1a-a082-2c7542a2c3ac","Type":"ContainerStarted","Data":"e0c317b2a6b610e601fc1aad6a1e17203ba8ecaf2fef2ad73672db5cf2f8e6a4"} Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.526563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.529172 4808 generic.go:334] "Generic (PLEG): container finished" podID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" 
containerID="e3479e183db53999834b3e4320a45e4e8b5d406bd60d8e11f0dd28988d1672b0" exitCode=0 Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.529216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerDied","Data":"e3479e183db53999834b3e4320a45e4e8b5d406bd60d8e11f0dd28988d1672b0"} Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.531898 4808 generic.go:334] "Generic (PLEG): container finished" podID="4d9f2e22-7067-48cb-9486-d4affde68298" containerID="a564ce5089ffaffdb5a698ebca448f4d6ccc9b23e1902d1903ea426301f783e3" exitCode=0 Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.531972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerDied","Data":"a564ce5089ffaffdb5a698ebca448f4d6ccc9b23e1902d1903ea426301f783e3"} Mar 11 10:07:13 crc kubenswrapper[4808]: I0311 10:07:13.546303 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" podStartSLOduration=3.546282323 podStartE2EDuration="3.546282323s" podCreationTimestamp="2026-03-11 10:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:07:13.544623406 +0000 UTC m=+5284.497946726" watchObservedRunningTime="2026-03-11 10:07:13.546282323 +0000 UTC m=+5284.499605643" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.003251 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.056677 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9mns\" (UniqueName: \"kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns\") pod \"4d9f2e22-7067-48cb-9486-d4affde68298\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.056812 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities\") pod \"4d9f2e22-7067-48cb-9486-d4affde68298\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.056865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content\") pod \"4d9f2e22-7067-48cb-9486-d4affde68298\" (UID: \"4d9f2e22-7067-48cb-9486-d4affde68298\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.070330 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities" (OuterVolumeSpecName: "utilities") pod "4d9f2e22-7067-48cb-9486-d4affde68298" (UID: "4d9f2e22-7067-48cb-9486-d4affde68298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.083644 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns" (OuterVolumeSpecName: "kube-api-access-h9mns") pod "4d9f2e22-7067-48cb-9486-d4affde68298" (UID: "4d9f2e22-7067-48cb-9486-d4affde68298"). InnerVolumeSpecName "kube-api-access-h9mns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.115387 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9f2e22-7067-48cb-9486-d4affde68298" (UID: "4d9f2e22-7067-48cb-9486-d4affde68298"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.158520 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9mns\" (UniqueName: \"kubernetes.io/projected/4d9f2e22-7067-48cb-9486-d4affde68298-kube-api-access-h9mns\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.158560 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.158574 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9f2e22-7067-48cb-9486-d4affde68298-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.320009 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.360250 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content\") pod \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.360389 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities\") pod \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.360479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlrw6\" (UniqueName: \"kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6\") pod \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\" (UID: \"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179\") " Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.361015 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities" (OuterVolumeSpecName: "utilities") pod "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" (UID: "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.363648 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6" (OuterVolumeSpecName: "kube-api-access-rlrw6") pod "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" (UID: "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179"). InnerVolumeSpecName "kube-api-access-rlrw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.415125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" (UID: "01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.462154 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.462234 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlrw6\" (UniqueName: \"kubernetes.io/projected/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-kube-api-access-rlrw6\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.462248 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.541534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g57w5" event={"ID":"4d9f2e22-7067-48cb-9486-d4affde68298","Type":"ContainerDied","Data":"b1391db41b4c6026ebc713f5327efa4a0bb0b6f41540aa9c00337e4bf788591c"} Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.541582 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g57w5" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.541610 4808 scope.go:117] "RemoveContainer" containerID="a564ce5089ffaffdb5a698ebca448f4d6ccc9b23e1902d1903ea426301f783e3" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.545563 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcgrt" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.553168 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcgrt" event={"ID":"01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179","Type":"ContainerDied","Data":"1340fc1b46f987ad9b86855d658d1be086b7519a439710c025d12689b3b5d7f9"} Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.561515 4808 scope.go:117] "RemoveContainer" containerID="ddef391bfa4b68732345593f8ca9670174e604f78601d4be09f2d1e1ce7ac8d6" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.576686 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.581654 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g57w5"] Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.595274 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.600304 4808 scope.go:117] "RemoveContainer" containerID="76ca212a738a34f94f5d49b450a9dfa06f6fe1da40123186a4824a88d3749fc2" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.604807 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jcgrt"] Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.623213 4808 scope.go:117] "RemoveContainer" 
containerID="e3479e183db53999834b3e4320a45e4e8b5d406bd60d8e11f0dd28988d1672b0" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.651860 4808 scope.go:117] "RemoveContainer" containerID="b3842a5f45216adcc48b393ad3ac53fb89dea7ef5bdd1526250d389121d348a5" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.672604 4808 scope.go:117] "RemoveContainer" containerID="44f84f3638f5d20fc6313ee5d68921338e3aa36c969eb367b7ae6ac117d13630" Mar 11 10:07:14 crc kubenswrapper[4808]: I0311 10:07:14.789616 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:07:14 crc kubenswrapper[4808]: E0311 10:07:14.789834 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.361291 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.361553 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wl7kp" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="registry-server" containerID="cri-o://5e78bc7afbb4895c00efb1e082506b170327a772ac71c576b2b54ff9ebae4506" gracePeriod=2 Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.580276 4808 generic.go:334] "Generic (PLEG): container finished" podID="cf38487c-78bd-4002-8ebc-80badc161631" containerID="5e78bc7afbb4895c00efb1e082506b170327a772ac71c576b2b54ff9ebae4506" exitCode=0 Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.580758 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerDied","Data":"5e78bc7afbb4895c00efb1e082506b170327a772ac71c576b2b54ff9ebae4506"} Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.741639 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.784920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content\") pod \"cf38487c-78bd-4002-8ebc-80badc161631\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.785069 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9fp6\" (UniqueName: \"kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6\") pod \"cf38487c-78bd-4002-8ebc-80badc161631\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.785121 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities\") pod \"cf38487c-78bd-4002-8ebc-80badc161631\" (UID: \"cf38487c-78bd-4002-8ebc-80badc161631\") " Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.786193 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities" (OuterVolumeSpecName: "utilities") pod "cf38487c-78bd-4002-8ebc-80badc161631" (UID: "cf38487c-78bd-4002-8ebc-80badc161631"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.791574 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6" (OuterVolumeSpecName: "kube-api-access-r9fp6") pod "cf38487c-78bd-4002-8ebc-80badc161631" (UID: "cf38487c-78bd-4002-8ebc-80badc161631"). InnerVolumeSpecName "kube-api-access-r9fp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.797372 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" path="/var/lib/kubelet/pods/01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179/volumes" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.798121 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" path="/var/lib/kubelet/pods/4d9f2e22-7067-48cb-9486-d4affde68298/volumes" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.817284 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf38487c-78bd-4002-8ebc-80badc161631" (UID: "cf38487c-78bd-4002-8ebc-80badc161631"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.887575 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.887608 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9fp6\" (UniqueName: \"kubernetes.io/projected/cf38487c-78bd-4002-8ebc-80badc161631-kube-api-access-r9fp6\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:15 crc kubenswrapper[4808]: I0311 10:07:15.887624 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf38487c-78bd-4002-8ebc-80badc161631-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.540899 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="rabbitmq" containerID="cri-o://c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b" gracePeriod=604796 Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.595759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl7kp" event={"ID":"cf38487c-78bd-4002-8ebc-80badc161631","Type":"ContainerDied","Data":"4ec6e1dcde65a2b3fe51a8655ccc25872aa0e991da4d890635947bb3ffed0214"} Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.595828 4808 scope.go:117] "RemoveContainer" containerID="5e78bc7afbb4895c00efb1e082506b170327a772ac71c576b2b54ff9ebae4506" Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.596230 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl7kp" Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.627009 4808 scope.go:117] "RemoveContainer" containerID="81edc2b353ca98da1788118bc9c412fb98ff964f8c9b969a42e29f4a7c882013" Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.631190 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.636434 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl7kp"] Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.645626 4808 scope.go:117] "RemoveContainer" containerID="66d36981cb1cb49e92dede616046b3e9394b01cb29ac63a3a189504924e4755f" Mar 11 10:07:16 crc kubenswrapper[4808]: I0311 10:07:16.825639 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="rabbitmq" containerID="cri-o://b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f" gracePeriod=604796 Mar 11 10:07:17 crc kubenswrapper[4808]: I0311 10:07:17.800107 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf38487c-78bd-4002-8ebc-80badc161631" path="/var/lib/kubelet/pods/cf38487c-78bd-4002-8ebc-80badc161631/volumes" Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.340665 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.408180 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.413062 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="dnsmasq-dns" 
containerID="cri-o://ba7ec146f53b187a45c3d7c05afcec11292a61d2823be9b7d0ef800b222b7f4c" gracePeriod=10 Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.640569 4808 generic.go:334] "Generic (PLEG): container finished" podID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerID="ba7ec146f53b187a45c3d7c05afcec11292a61d2823be9b7d0ef800b222b7f4c" exitCode=0 Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.640624 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" event={"ID":"d2d68f53-525a-46d8-81c6-a3668c7fb842","Type":"ContainerDied","Data":"ba7ec146f53b187a45c3d7c05afcec11292a61d2823be9b7d0ef800b222b7f4c"} Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.858370 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.983302 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc\") pod \"d2d68f53-525a-46d8-81c6-a3668c7fb842\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.983396 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config\") pod \"d2d68f53-525a-46d8-81c6-a3668c7fb842\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.983565 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5tsx\" (UniqueName: \"kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx\") pod \"d2d68f53-525a-46d8-81c6-a3668c7fb842\" (UID: \"d2d68f53-525a-46d8-81c6-a3668c7fb842\") " Mar 11 10:07:21 crc kubenswrapper[4808]: I0311 10:07:21.988623 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx" (OuterVolumeSpecName: "kube-api-access-k5tsx") pod "d2d68f53-525a-46d8-81c6-a3668c7fb842" (UID: "d2d68f53-525a-46d8-81c6-a3668c7fb842"). InnerVolumeSpecName "kube-api-access-k5tsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.021112 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2d68f53-525a-46d8-81c6-a3668c7fb842" (UID: "d2d68f53-525a-46d8-81c6-a3668c7fb842"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.029137 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config" (OuterVolumeSpecName: "config") pod "d2d68f53-525a-46d8-81c6-a3668c7fb842" (UID: "d2d68f53-525a-46d8-81c6-a3668c7fb842"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.085339 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.085392 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5tsx\" (UniqueName: \"kubernetes.io/projected/d2d68f53-525a-46d8-81c6-a3668c7fb842-kube-api-access-k5tsx\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.085406 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2d68f53-525a-46d8-81c6-a3668c7fb842-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.583567 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.28:5671: connect: connection refused" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.652271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" event={"ID":"d2d68f53-525a-46d8-81c6-a3668c7fb842","Type":"ContainerDied","Data":"54f1f7ead4d496835fa98c7d6254569b9adf9f8054008eeb9af02f1ca0d83774"} Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.652582 4808 scope.go:117] "RemoveContainer" containerID="ba7ec146f53b187a45c3d7c05afcec11292a61d2823be9b7d0ef800b222b7f4c" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.652405 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-dkqvb" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.690795 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.694630 4808 scope.go:117] "RemoveContainer" containerID="36cf4051a2e640019b01ad9abd8dc3ffdffafaa54c2a7bed6dc6b073be48da07" Mar 11 10:07:22 crc kubenswrapper[4808]: I0311 10:07:22.698857 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-dkqvb"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.152743 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202583 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202606 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202627 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202716 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202809 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pks5\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 
crc kubenswrapper[4808]: I0311 10:07:23.202869 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.202895 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd\") pod \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\" (UID: \"1ae82137-73f8-4f99-a5b2-acecdb1e372a\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.209658 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.210074 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.210265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.214403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.221013 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.220963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info" (OuterVolumeSpecName: "pod-info") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.221059 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5" (OuterVolumeSpecName: "kube-api-access-7pks5") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "kube-api-access-7pks5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.232148 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f" (OuterVolumeSpecName: "persistence") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "pvc-fab05de8-314b-4d20-a094-6a424085124f". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.253738 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data" (OuterVolumeSpecName: "config-data") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.267422 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf" (OuterVolumeSpecName: "server-conf") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.294553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1ae82137-73f8-4f99-a5b2-acecdb1e372a" (UID: "1ae82137-73f8-4f99-a5b2-acecdb1e372a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.303954 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ae82137-73f8-4f99-a5b2-acecdb1e372a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.303983 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pks5\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-kube-api-access-7pks5\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.303993 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304002 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304038 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") on node \"crc\" " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304050 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304062 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ae82137-73f8-4f99-a5b2-acecdb1e372a-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 
10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304072 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304080 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ae82137-73f8-4f99-a5b2-acecdb1e372a-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304090 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.304099 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ae82137-73f8-4f99-a5b2-acecdb1e372a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.320456 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.320646 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fab05de8-314b-4d20-a094-6a424085124f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f") on node "crc" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.360644 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.404898 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.405738 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.405788 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.405840 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.405888 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.405982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406117 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406163 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406189 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzbd2\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2\") pod \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\" (UID: \"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9\") " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.406553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.407182 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.407207 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.407221 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.411174 4808 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.413031 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info" (OuterVolumeSpecName: "pod-info") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.414188 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.415585 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2" (OuterVolumeSpecName: "kube-api-access-gzbd2") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "kube-api-access-gzbd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.419028 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.422274 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data" (OuterVolumeSpecName: "config-data") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.429751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8" (OuterVolumeSpecName: "persistence") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "pvc-0f751401-42db-445a-851f-76536f4e37c8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.451612 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf" (OuterVolumeSpecName: "server-conf") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.489380 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" (UID: "4c8053c2-7a71-4381-aa2f-9e0d4d1963e9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508261 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508304 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508315 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508324 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzbd2\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-kube-api-access-gzbd2\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508333 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508340 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508389 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") on node \"crc\" " Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508399 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.508410 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.526530 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.526680 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f751401-42db-445a-851f-76536f4e37c8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8") on node "crc" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.609494 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.663234 4808 generic.go:334] "Generic (PLEG): container finished" podID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerID="c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b" exitCode=0 Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.663290 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.663326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerDied","Data":"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b"} Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.663376 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ae82137-73f8-4f99-a5b2-acecdb1e372a","Type":"ContainerDied","Data":"11844c7e18b454bf43814677e82e276b083d36d6497f36e6daba52fac1098903"} Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.663400 4808 scope.go:117] "RemoveContainer" containerID="c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.666109 4808 generic.go:334] "Generic (PLEG): container finished" podID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" 
containerID="b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f" exitCode=0 Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.666131 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.666147 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerDied","Data":"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f"} Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.666533 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c8053c2-7a71-4381-aa2f-9e0d4d1963e9","Type":"ContainerDied","Data":"3da859e0059f3d69b4d4bb3c734a37479e83deb78dab31a8b8a486601e8ca88a"} Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.683710 4808 scope.go:117] "RemoveContainer" containerID="efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.702803 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.722762 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.732933 4808 scope.go:117] "RemoveContainer" containerID="c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.733484 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b\": container with ID starting with c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b not found: ID does not exist" 
containerID="c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.733528 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b"} err="failed to get container status \"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b\": rpc error: code = NotFound desc = could not find container \"c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b\": container with ID starting with c8262d53763b70bbc6a215e549e9c81dc4bd21c1b8b87dfcdf6589b63875188b not found: ID does not exist" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.733546 4808 scope.go:117] "RemoveContainer" containerID="efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.733856 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c\": container with ID starting with efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c not found: ID does not exist" containerID="efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.733876 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c"} err="failed to get container status \"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c\": rpc error: code = NotFound desc = could not find container \"efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c\": container with ID starting with efd94cab5a3481e535445c098eba28875e16625a175a3c3316016fc0ef43cc2c not found: ID does not exist" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.733889 4808 scope.go:117] 
"RemoveContainer" containerID="b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.745157 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.757849 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.759428 4808 scope.go:117] "RemoveContainer" containerID="b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.764613 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.764986 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765022 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765035 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765041 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765052 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765057 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="extract-content" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765105 
4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="setup-container" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765111 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="setup-container" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765120 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="init" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765126 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="init" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765176 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765239 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765257 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765267 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765292 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="setup-container" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765300 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="setup-container" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765308 4808 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765313 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765323 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765329 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765335 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765340 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765349 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765391 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765404 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765412 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765426 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="dnsmasq-dns" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765433 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="dnsmasq-dns" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.765446 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="extract-utilities" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765682 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765708 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" containerName="dnsmasq-dns" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765725 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9f2e22-7067-48cb-9486-d4affde68298" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765756 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" containerName="rabbitmq" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765770 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf38487c-78bd-4002-8ebc-80badc161631" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.765782 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cc6986-a4ad-4d8b-aa3d-6fc1cfc22179" containerName="registry-server" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.766895 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.768846 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.769134 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.769341 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.769698 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.769697 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-88mh2" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.770133 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.770392 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.778607 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.781580 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.784792 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785115 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9gvhv" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785174 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785289 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785378 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785495 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.785527 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.786126 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.796461 4808 scope.go:117] "RemoveContainer" containerID="b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.796859 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f\": container with ID starting with b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f not found: ID does not 
exist" containerID="b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.796895 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f"} err="failed to get container status \"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f\": rpc error: code = NotFound desc = could not find container \"b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f\": container with ID starting with b9d3bf6a00bbc8a90ce67d58f9420240476fdfe68cf36c1bfee3ee4dc71c9b4f not found: ID does not exist" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.796931 4808 scope.go:117] "RemoveContainer" containerID="b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5" Mar 11 10:07:23 crc kubenswrapper[4808]: E0311 10:07:23.797317 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5\": container with ID starting with b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5 not found: ID does not exist" containerID="b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.797345 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5"} err="failed to get container status \"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5\": rpc error: code = NotFound desc = could not find container \"b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5\": container with ID starting with b9980c4f47b1d66eb44f1d68026dd866d210be555e2077515b80ae42fc2070e5 not found: ID does not exist" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.799269 4808 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae82137-73f8-4f99-a5b2-acecdb1e372a" path="/var/lib/kubelet/pods/1ae82137-73f8-4f99-a5b2-acecdb1e372a/volumes" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.800265 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" path="/var/lib/kubelet/pods/4c8053c2-7a71-4381-aa2f-9e0d4d1963e9/volumes" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.802019 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d68f53-525a-46d8-81c6-a3668c7fb842" path="/var/lib/kubelet/pods/d2d68f53-525a-46d8-81c6-a3668c7fb842/volumes" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.802749 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.813852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.813903 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36603a85-71a0-4dc5-ab71-cfee0f3331c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.813953 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 
crc kubenswrapper[4808]: I0311 10:07:23.813978 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrnr\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-kube-api-access-qkrnr\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814184 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36603a85-71a0-4dc5-ab71-cfee0f3331c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc 
kubenswrapper[4808]: I0311 10:07:23.814214 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.814264 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916049 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916144 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916179 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916204 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrnr\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-kube-api-access-qkrnr\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916255 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916661 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36603a85-71a0-4dc5-ab71-cfee0f3331c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916745 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916769 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d49ss\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-kube-api-access-d49ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916793 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916846 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1274333-a9d8-468f-be72-671f26f26d78-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916917 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916952 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916974 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36603a85-71a0-4dc5-ab71-cfee0f3331c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.916995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917022 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1274333-a9d8-468f-be72-671f26f26d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917820 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36603a85-71a0-4dc5-ab71-cfee0f3331c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.917875 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.919905 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.919929 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1cb73165314b57b01ce9da0b6744027cf77411009629d1be7e0160691d4d6f11/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.920820 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.920887 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36603a85-71a0-4dc5-ab71-cfee0f3331c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.924977 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.929789 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36603a85-71a0-4dc5-ab71-cfee0f3331c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " 
pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.938570 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrnr\" (UniqueName: \"kubernetes.io/projected/36603a85-71a0-4dc5-ab71-cfee0f3331c3-kube-api-access-qkrnr\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:23 crc kubenswrapper[4808]: I0311 10:07:23.952649 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fab05de8-314b-4d20-a094-6a424085124f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fab05de8-314b-4d20-a094-6a424085124f\") pod \"rabbitmq-server-0\" (UID: \"36603a85-71a0-4dc5-ab71-cfee0f3331c3\") " pod="openstack/rabbitmq-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017840 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1274333-a9d8-468f-be72-671f26f26d78-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017880 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017908 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017930 
4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1274333-a9d8-468f-be72-671f26f26d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017959 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.017985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018003 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018092 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018120 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d49ss\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-kube-api-access-d49ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018168 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018459 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018655 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.018741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.019294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.019464 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1274333-a9d8-468f-be72-671f26f26d78-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.020175 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.020207 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e00aa266669966b723e5adf11419961b42a5bc0d2a6dcd9225857088f1f90cdd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.021379 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1274333-a9d8-468f-be72-671f26f26d78-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.021417 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.021762 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1274333-a9d8-468f-be72-671f26f26d78-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.021871 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.042156 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d49ss\" (UniqueName: \"kubernetes.io/projected/e1274333-a9d8-468f-be72-671f26f26d78-kube-api-access-d49ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.044681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f751401-42db-445a-851f-76536f4e37c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f751401-42db-445a-851f-76536f4e37c8\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1274333-a9d8-468f-be72-671f26f26d78\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.086092 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.133407 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.548395 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.640129 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:07:24 crc kubenswrapper[4808]: W0311 10:07:24.642536 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1274333_a9d8_468f_be72_671f26f26d78.slice/crio-09efdd5a06b3288c4bc3714a83196042e45b8e42b81b4c4bb0d09985ac24a473 WatchSource:0}: Error finding container 09efdd5a06b3288c4bc3714a83196042e45b8e42b81b4c4bb0d09985ac24a473: Status 404 returned error can't find the container with id 09efdd5a06b3288c4bc3714a83196042e45b8e42b81b4c4bb0d09985ac24a473 Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.676229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1274333-a9d8-468f-be72-671f26f26d78","Type":"ContainerStarted","Data":"09efdd5a06b3288c4bc3714a83196042e45b8e42b81b4c4bb0d09985ac24a473"} Mar 11 10:07:24 crc kubenswrapper[4808]: I0311 10:07:24.677308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36603a85-71a0-4dc5-ab71-cfee0f3331c3","Type":"ContainerStarted","Data":"578ef7ffecf2592cd7eea6fb722bd0b7722e1b47c463ec9662f58657d988bb25"} Mar 11 10:07:26 crc kubenswrapper[4808]: I0311 10:07:26.702908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1274333-a9d8-468f-be72-671f26f26d78","Type":"ContainerStarted","Data":"214ccbed076ae001e9e2d6dbe3264c9a3c5bcf7bd5249be406a550d9c464eb0f"} Mar 11 10:07:26 crc kubenswrapper[4808]: I0311 10:07:26.706667 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"36603a85-71a0-4dc5-ab71-cfee0f3331c3","Type":"ContainerStarted","Data":"1a80df7e5997fe7be9ef113b1bf395f7c75fd56a202798c997ed960661d2ecd1"} Mar 11 10:07:27 crc kubenswrapper[4808]: I0311 10:07:27.789701 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:07:27 crc kubenswrapper[4808]: E0311 10:07:27.789919 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:07:28 crc kubenswrapper[4808]: I0311 10:07:28.330073 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4c8053c2-7a71-4381-aa2f-9e0d4d1963e9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.29:5671: i/o timeout" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.203590 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.207779 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.221261 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.355078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.355128 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.355164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxtb\" (UniqueName: \"kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.456692 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.457073 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.457128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxtb\" (UniqueName: \"kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.457145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.457636 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.475118 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxtb\" (UniqueName: \"kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb\") pod \"redhat-operators-9lwl6\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.543158 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:31 crc kubenswrapper[4808]: I0311 10:07:31.986818 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:32 crc kubenswrapper[4808]: I0311 10:07:32.762973 4808 generic.go:334] "Generic (PLEG): container finished" podID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerID="a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e" exitCode=0 Mar 11 10:07:32 crc kubenswrapper[4808]: I0311 10:07:32.763025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerDied","Data":"a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e"} Mar 11 10:07:32 crc kubenswrapper[4808]: I0311 10:07:32.763052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerStarted","Data":"25103127ba0d5a12ffdee3e170da11683f98478f0b8663c6209c983c720535d3"} Mar 11 10:07:33 crc kubenswrapper[4808]: I0311 10:07:33.771023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerStarted","Data":"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb"} Mar 11 10:07:34 crc kubenswrapper[4808]: I0311 10:07:34.782095 4808 generic.go:334] "Generic (PLEG): container finished" podID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerID="e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb" exitCode=0 Mar 11 10:07:34 crc kubenswrapper[4808]: I0311 10:07:34.782165 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" 
event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerDied","Data":"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb"} Mar 11 10:07:36 crc kubenswrapper[4808]: I0311 10:07:36.806102 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerStarted","Data":"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef"} Mar 11 10:07:36 crc kubenswrapper[4808]: I0311 10:07:36.835886 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lwl6" podStartSLOduration=2.802618503 podStartE2EDuration="5.835865959s" podCreationTimestamp="2026-03-11 10:07:31 +0000 UTC" firstStartedPulling="2026-03-11 10:07:32.766057842 +0000 UTC m=+5303.719381202" lastFinishedPulling="2026-03-11 10:07:35.799305288 +0000 UTC m=+5306.752628658" observedRunningTime="2026-03-11 10:07:36.832205645 +0000 UTC m=+5307.785528995" watchObservedRunningTime="2026-03-11 10:07:36.835865959 +0000 UTC m=+5307.789189299" Mar 11 10:07:41 crc kubenswrapper[4808]: I0311 10:07:41.543346 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:41 crc kubenswrapper[4808]: I0311 10:07:41.544522 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:41 crc kubenswrapper[4808]: I0311 10:07:41.790217 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:07:41 crc kubenswrapper[4808]: E0311 10:07:41.790852 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:07:42 crc kubenswrapper[4808]: I0311 10:07:42.591684 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9lwl6" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="registry-server" probeResult="failure" output=< Mar 11 10:07:42 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Mar 11 10:07:42 crc kubenswrapper[4808]: > Mar 11 10:07:51 crc kubenswrapper[4808]: I0311 10:07:51.605028 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:51 crc kubenswrapper[4808]: I0311 10:07:51.661007 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:51 crc kubenswrapper[4808]: I0311 10:07:51.844546 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:52 crc kubenswrapper[4808]: I0311 10:07:52.955869 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lwl6" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="registry-server" containerID="cri-o://7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef" gracePeriod=2 Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.370806 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.452082 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities\") pod \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.452174 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content\") pod \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.452489 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxtb\" (UniqueName: \"kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb\") pod \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\" (UID: \"81d9ed0a-1430-421a-8f1f-7f5837ff2383\") " Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.453130 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities" (OuterVolumeSpecName: "utilities") pod "81d9ed0a-1430-421a-8f1f-7f5837ff2383" (UID: "81d9ed0a-1430-421a-8f1f-7f5837ff2383"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.458774 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb" (OuterVolumeSpecName: "kube-api-access-pqxtb") pod "81d9ed0a-1430-421a-8f1f-7f5837ff2383" (UID: "81d9ed0a-1430-421a-8f1f-7f5837ff2383"). InnerVolumeSpecName "kube-api-access-pqxtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.554607 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxtb\" (UniqueName: \"kubernetes.io/projected/81d9ed0a-1430-421a-8f1f-7f5837ff2383-kube-api-access-pqxtb\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.554644 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.599443 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81d9ed0a-1430-421a-8f1f-7f5837ff2383" (UID: "81d9ed0a-1430-421a-8f1f-7f5837ff2383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.655831 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d9ed0a-1430-421a-8f1f-7f5837ff2383-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:53 crc kubenswrapper[4808]: E0311 10:07:53.957644 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d9ed0a_1430_421a_8f1f_7f5837ff2383.slice/crio-25103127ba0d5a12ffdee3e170da11683f98478f0b8663c6209c983c720535d3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d9ed0a_1430_421a_8f1f_7f5837ff2383.slice\": RecentStats: unable to find data in memory cache]" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.970870 4808 generic.go:334] "Generic (PLEG): container 
finished" podID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerID="7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef" exitCode=0 Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.970926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerDied","Data":"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef"} Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.970971 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lwl6" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.971003 4808 scope.go:117] "RemoveContainer" containerID="7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.970983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lwl6" event={"ID":"81d9ed0a-1430-421a-8f1f-7f5837ff2383","Type":"ContainerDied","Data":"25103127ba0d5a12ffdee3e170da11683f98478f0b8663c6209c983c720535d3"} Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.989993 4808 scope.go:117] "RemoveContainer" containerID="e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb" Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.992060 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:53 crc kubenswrapper[4808]: I0311 10:07:53.996664 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lwl6"] Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.008051 4808 scope.go:117] "RemoveContainer" containerID="a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.042853 4808 scope.go:117] "RemoveContainer" 
containerID="7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef" Mar 11 10:07:54 crc kubenswrapper[4808]: E0311 10:07:54.043437 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef\": container with ID starting with 7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef not found: ID does not exist" containerID="7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.043498 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef"} err="failed to get container status \"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef\": rpc error: code = NotFound desc = could not find container \"7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef\": container with ID starting with 7872949b007dcab0222e5d12faa2237b205ac28e4ee1812d4d0d31c69e6923ef not found: ID does not exist" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.043524 4808 scope.go:117] "RemoveContainer" containerID="e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb" Mar 11 10:07:54 crc kubenswrapper[4808]: E0311 10:07:54.043833 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb\": container with ID starting with e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb not found: ID does not exist" containerID="e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.043869 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb"} err="failed to get container status \"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb\": rpc error: code = NotFound desc = could not find container \"e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb\": container with ID starting with e4609e638318a62b8b9d1f114a066a6d00848853831e624fa1d59e0d7a6f3eeb not found: ID does not exist" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.043882 4808 scope.go:117] "RemoveContainer" containerID="a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e" Mar 11 10:07:54 crc kubenswrapper[4808]: E0311 10:07:54.044235 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e\": container with ID starting with a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e not found: ID does not exist" containerID="a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.044280 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e"} err="failed to get container status \"a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e\": rpc error: code = NotFound desc = could not find container \"a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e\": container with ID starting with a56323ba4e80e54c2895fdbbbcd2b4ad19f3171df4a5dba82f89d35c1641844e not found: ID does not exist" Mar 11 10:07:54 crc kubenswrapper[4808]: I0311 10:07:54.790025 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:07:55 crc kubenswrapper[4808]: I0311 10:07:55.800681 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" path="/var/lib/kubelet/pods/81d9ed0a-1430-421a-8f1f-7f5837ff2383/volumes" Mar 11 10:07:55 crc kubenswrapper[4808]: I0311 10:07:55.993252 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37"} Mar 11 10:07:59 crc kubenswrapper[4808]: I0311 10:07:59.023795 4808 generic.go:334] "Generic (PLEG): container finished" podID="36603a85-71a0-4dc5-ab71-cfee0f3331c3" containerID="1a80df7e5997fe7be9ef113b1bf395f7c75fd56a202798c997ed960661d2ecd1" exitCode=0 Mar 11 10:07:59 crc kubenswrapper[4808]: I0311 10:07:59.023881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36603a85-71a0-4dc5-ab71-cfee0f3331c3","Type":"ContainerDied","Data":"1a80df7e5997fe7be9ef113b1bf395f7c75fd56a202798c997ed960661d2ecd1"} Mar 11 10:07:59 crc kubenswrapper[4808]: I0311 10:07:59.028281 4808 generic.go:334] "Generic (PLEG): container finished" podID="e1274333-a9d8-468f-be72-671f26f26d78" containerID="214ccbed076ae001e9e2d6dbe3264c9a3c5bcf7bd5249be406a550d9c464eb0f" exitCode=0 Mar 11 10:07:59 crc kubenswrapper[4808]: I0311 10:07:59.028341 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1274333-a9d8-468f-be72-671f26f26d78","Type":"ContainerDied","Data":"214ccbed076ae001e9e2d6dbe3264c9a3c5bcf7bd5249be406a550d9c464eb0f"} Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.036719 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1274333-a9d8-468f-be72-671f26f26d78","Type":"ContainerStarted","Data":"660b098add6a5269c998655d5a6c31c4aeb79af2406fca52f5d6f588ad9bd659"} Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.037370 4808 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.038763 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"36603a85-71a0-4dc5-ab71-cfee0f3331c3","Type":"ContainerStarted","Data":"bf65fc24dda41c1ed81b5184e8124ad7d166a380b4de2bac49fc4db6588b815e"} Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.038974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.064478 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.064457885 podStartE2EDuration="37.064457885s" podCreationTimestamp="2026-03-11 10:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:08:00.060969376 +0000 UTC m=+5331.014292706" watchObservedRunningTime="2026-03-11 10:08:00.064457885 +0000 UTC m=+5331.017781205" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.090798 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.090773971 podStartE2EDuration="37.090773971s" podCreationTimestamp="2026-03-11 10:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:08:00.0872112 +0000 UTC m=+5331.040534520" watchObservedRunningTime="2026-03-11 10:08:00.090773971 +0000 UTC m=+5331.044097311" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.136652 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553728-47kvg"] Mar 11 10:08:00 crc kubenswrapper[4808]: E0311 10:08:00.136973 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="registry-server" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.136993 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="registry-server" Mar 11 10:08:00 crc kubenswrapper[4808]: E0311 10:08:00.137006 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="extract-content" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.137015 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="extract-content" Mar 11 10:08:00 crc kubenswrapper[4808]: E0311 10:08:00.137028 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="extract-utilities" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.137034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="extract-utilities" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.137225 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d9ed0a-1430-421a-8f1f-7f5837ff2383" containerName="registry-server" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.137783 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.140228 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.140997 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.141457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.145523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-47kvg"] Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.258376 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlf2d\" (UniqueName: \"kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d\") pod \"auto-csr-approver-29553728-47kvg\" (UID: \"7b9b275d-100b-449a-8cff-8ff9ba3fc336\") " pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.360306 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlf2d\" (UniqueName: \"kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d\") pod \"auto-csr-approver-29553728-47kvg\" (UID: \"7b9b275d-100b-449a-8cff-8ff9ba3fc336\") " pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.379414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlf2d\" (UniqueName: \"kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d\") pod \"auto-csr-approver-29553728-47kvg\" (UID: \"7b9b275d-100b-449a-8cff-8ff9ba3fc336\") " 
pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.455171 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:00 crc kubenswrapper[4808]: I0311 10:08:00.880014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-47kvg"] Mar 11 10:08:00 crc kubenswrapper[4808]: W0311 10:08:00.881156 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9b275d_100b_449a_8cff_8ff9ba3fc336.slice/crio-06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32 WatchSource:0}: Error finding container 06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32: Status 404 returned error can't find the container with id 06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32 Mar 11 10:08:01 crc kubenswrapper[4808]: I0311 10:08:01.046041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-47kvg" event={"ID":"7b9b275d-100b-449a-8cff-8ff9ba3fc336","Type":"ContainerStarted","Data":"06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32"} Mar 11 10:08:03 crc kubenswrapper[4808]: I0311 10:08:03.063911 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9b275d-100b-449a-8cff-8ff9ba3fc336" containerID="f28171a0c7103528519f133e1465494817b7805f2ce2a6a73885d25fcd958065" exitCode=0 Mar 11 10:08:03 crc kubenswrapper[4808]: I0311 10:08:03.064039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-47kvg" event={"ID":"7b9b275d-100b-449a-8cff-8ff9ba3fc336","Type":"ContainerDied","Data":"f28171a0c7103528519f133e1465494817b7805f2ce2a6a73885d25fcd958065"} Mar 11 10:08:04 crc kubenswrapper[4808]: I0311 10:08:04.376452 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:04 crc kubenswrapper[4808]: I0311 10:08:04.524421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlf2d\" (UniqueName: \"kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d\") pod \"7b9b275d-100b-449a-8cff-8ff9ba3fc336\" (UID: \"7b9b275d-100b-449a-8cff-8ff9ba3fc336\") " Mar 11 10:08:04 crc kubenswrapper[4808]: I0311 10:08:04.531314 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d" (OuterVolumeSpecName: "kube-api-access-vlf2d") pod "7b9b275d-100b-449a-8cff-8ff9ba3fc336" (UID: "7b9b275d-100b-449a-8cff-8ff9ba3fc336"). InnerVolumeSpecName "kube-api-access-vlf2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:08:04 crc kubenswrapper[4808]: I0311 10:08:04.626245 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlf2d\" (UniqueName: \"kubernetes.io/projected/7b9b275d-100b-449a-8cff-8ff9ba3fc336-kube-api-access-vlf2d\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.079801 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-47kvg" event={"ID":"7b9b275d-100b-449a-8cff-8ff9ba3fc336","Type":"ContainerDied","Data":"06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32"} Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.080132 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06055d5bacf89d18d17dab1de7cfec4bf06599db941316527ec2c20372f62c32" Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.079856 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-47kvg" Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.447151 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-67vl8"] Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.452873 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-67vl8"] Mar 11 10:08:05 crc kubenswrapper[4808]: I0311 10:08:05.797750 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d924bd5e-3ed1-4cb4-94f1-87c160c51b47" path="/var/lib/kubelet/pods/d924bd5e-3ed1-4cb4-94f1-87c160c51b47/volumes" Mar 11 10:08:14 crc kubenswrapper[4808]: I0311 10:08:14.090732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 10:08:14 crc kubenswrapper[4808]: I0311 10:08:14.137594 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.547184 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:17 crc kubenswrapper[4808]: E0311 10:08:17.547917 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9b275d-100b-449a-8cff-8ff9ba3fc336" containerName="oc" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.547936 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9b275d-100b-449a-8cff-8ff9ba3fc336" containerName="oc" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.548166 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9b275d-100b-449a-8cff-8ff9ba3fc336" containerName="oc" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.548771 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.552661 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-48lsp" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.564806 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.643937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2dc\" (UniqueName: \"kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc\") pod \"mariadb-client\" (UID: \"82fd3008-62d7-4c21-bae3-56d2d443414d\") " pod="openstack/mariadb-client" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.745838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2dc\" (UniqueName: \"kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc\") pod \"mariadb-client\" (UID: \"82fd3008-62d7-4c21-bae3-56d2d443414d\") " pod="openstack/mariadb-client" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.767315 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2dc\" (UniqueName: \"kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc\") pod \"mariadb-client\" (UID: \"82fd3008-62d7-4c21-bae3-56d2d443414d\") " pod="openstack/mariadb-client" Mar 11 10:08:17 crc kubenswrapper[4808]: I0311 10:08:17.867753 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:08:18 crc kubenswrapper[4808]: I0311 10:08:18.430189 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:18 crc kubenswrapper[4808]: W0311 10:08:18.444768 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fd3008_62d7_4c21_bae3_56d2d443414d.slice/crio-262b20f7bb22580ff2d87b64705ab97ff5b7a8a18b7df4e3ff29ee3d66109d41 WatchSource:0}: Error finding container 262b20f7bb22580ff2d87b64705ab97ff5b7a8a18b7df4e3ff29ee3d66109d41: Status 404 returned error can't find the container with id 262b20f7bb22580ff2d87b64705ab97ff5b7a8a18b7df4e3ff29ee3d66109d41 Mar 11 10:08:19 crc kubenswrapper[4808]: I0311 10:08:19.194131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"82fd3008-62d7-4c21-bae3-56d2d443414d","Type":"ContainerStarted","Data":"262b20f7bb22580ff2d87b64705ab97ff5b7a8a18b7df4e3ff29ee3d66109d41"} Mar 11 10:08:25 crc kubenswrapper[4808]: I0311 10:08:25.243091 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"82fd3008-62d7-4c21-bae3-56d2d443414d","Type":"ContainerStarted","Data":"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d"} Mar 11 10:08:25 crc kubenswrapper[4808]: I0311 10:08:25.260878 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.694138669 podStartE2EDuration="8.260860881s" podCreationTimestamp="2026-03-11 10:08:17 +0000 UTC" firstStartedPulling="2026-03-11 10:08:18.449429088 +0000 UTC m=+5349.402752418" lastFinishedPulling="2026-03-11 10:08:25.01615131 +0000 UTC m=+5355.969474630" observedRunningTime="2026-03-11 10:08:25.256031214 +0000 UTC m=+5356.209354544" watchObservedRunningTime="2026-03-11 10:08:25.260860881 +0000 UTC m=+5356.214184211" Mar 11 10:08:40 crc 
kubenswrapper[4808]: I0311 10:08:40.377940 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:40 crc kubenswrapper[4808]: I0311 10:08:40.378710 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="82fd3008-62d7-4c21-bae3-56d2d443414d" containerName="mariadb-client" containerID="cri-o://5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d" gracePeriod=30 Mar 11 10:08:40 crc kubenswrapper[4808]: I0311 10:08:40.842803 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:08:40 crc kubenswrapper[4808]: I0311 10:08:40.939284 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2dc\" (UniqueName: \"kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc\") pod \"82fd3008-62d7-4c21-bae3-56d2d443414d\" (UID: \"82fd3008-62d7-4c21-bae3-56d2d443414d\") " Mar 11 10:08:40 crc kubenswrapper[4808]: I0311 10:08:40.945551 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc" (OuterVolumeSpecName: "kube-api-access-sk2dc") pod "82fd3008-62d7-4c21-bae3-56d2d443414d" (UID: "82fd3008-62d7-4c21-bae3-56d2d443414d"). InnerVolumeSpecName "kube-api-access-sk2dc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.041446 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2dc\" (UniqueName: \"kubernetes.io/projected/82fd3008-62d7-4c21-bae3-56d2d443414d-kube-api-access-sk2dc\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.395375 4808 generic.go:334] "Generic (PLEG): container finished" podID="82fd3008-62d7-4c21-bae3-56d2d443414d" containerID="5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d" exitCode=143 Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.395424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"82fd3008-62d7-4c21-bae3-56d2d443414d","Type":"ContainerDied","Data":"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d"} Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.395448 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.395480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"82fd3008-62d7-4c21-bae3-56d2d443414d","Type":"ContainerDied","Data":"262b20f7bb22580ff2d87b64705ab97ff5b7a8a18b7df4e3ff29ee3d66109d41"} Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.395503 4808 scope.go:117] "RemoveContainer" containerID="5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.433839 4808 scope.go:117] "RemoveContainer" containerID="5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d" Mar 11 10:08:41 crc kubenswrapper[4808]: E0311 10:08:41.437695 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d\": container with ID starting with 5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d not found: ID does not exist" containerID="5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.437759 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d"} err="failed to get container status \"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d\": rpc error: code = NotFound desc = could not find container \"5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d\": container with ID starting with 5f7b13d0772d8646d0d71a6cf29cd378d2b514efd87aea074c9638c70cc7979d not found: ID does not exist" Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.442162 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.447585 4808 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:08:41 crc kubenswrapper[4808]: I0311 10:08:41.799028 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fd3008-62d7-4c21-bae3-56d2d443414d" path="/var/lib/kubelet/pods/82fd3008-62d7-4c21-bae3-56d2d443414d/volumes" Mar 11 10:08:56 crc kubenswrapper[4808]: I0311 10:08:56.610863 4808 scope.go:117] "RemoveContainer" containerID="f1e2ed260f77d424fc8de77efcebda27dd8376e683f2446d3fab90affdbe47a8" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.142523 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553730-j45zk"] Mar 11 10:10:00 crc kubenswrapper[4808]: E0311 10:10:00.143500 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd3008-62d7-4c21-bae3-56d2d443414d" containerName="mariadb-client" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.143520 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd3008-62d7-4c21-bae3-56d2d443414d" containerName="mariadb-client" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.143682 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd3008-62d7-4c21-bae3-56d2d443414d" containerName="mariadb-client" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.144198 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.146379 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.146846 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.147306 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.151911 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-j45zk"] Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.331414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf97z\" (UniqueName: \"kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z\") pod \"auto-csr-approver-29553730-j45zk\" (UID: \"94a16095-2446-4848-8204-65f4b867368f\") " pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.432832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf97z\" (UniqueName: \"kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z\") pod \"auto-csr-approver-29553730-j45zk\" (UID: \"94a16095-2446-4848-8204-65f4b867368f\") " pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.457562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf97z\" (UniqueName: \"kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z\") pod \"auto-csr-approver-29553730-j45zk\" (UID: \"94a16095-2446-4848-8204-65f4b867368f\") " 
pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.465541 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.875788 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-j45zk"] Mar 11 10:10:00 crc kubenswrapper[4808]: I0311 10:10:00.890577 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:10:01 crc kubenswrapper[4808]: I0311 10:10:01.050534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-j45zk" event={"ID":"94a16095-2446-4848-8204-65f4b867368f","Type":"ContainerStarted","Data":"77adec710f7fa8b13d2fc0ac7d2ef1066ab2eedd20bf7d2c7be711a6c52dd8aa"} Mar 11 10:10:03 crc kubenswrapper[4808]: I0311 10:10:03.067503 4808 generic.go:334] "Generic (PLEG): container finished" podID="94a16095-2446-4848-8204-65f4b867368f" containerID="db7ff7442ec651d44c73e2b1eb0f57714519159a18293099d69078af59dae38e" exitCode=0 Mar 11 10:10:03 crc kubenswrapper[4808]: I0311 10:10:03.067608 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-j45zk" event={"ID":"94a16095-2446-4848-8204-65f4b867368f","Type":"ContainerDied","Data":"db7ff7442ec651d44c73e2b1eb0f57714519159a18293099d69078af59dae38e"} Mar 11 10:10:04 crc kubenswrapper[4808]: I0311 10:10:04.353845 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:04 crc kubenswrapper[4808]: I0311 10:10:04.514904 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf97z\" (UniqueName: \"kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z\") pod \"94a16095-2446-4848-8204-65f4b867368f\" (UID: \"94a16095-2446-4848-8204-65f4b867368f\") " Mar 11 10:10:04 crc kubenswrapper[4808]: I0311 10:10:04.522461 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z" (OuterVolumeSpecName: "kube-api-access-lf97z") pod "94a16095-2446-4848-8204-65f4b867368f" (UID: "94a16095-2446-4848-8204-65f4b867368f"). InnerVolumeSpecName "kube-api-access-lf97z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:10:04 crc kubenswrapper[4808]: I0311 10:10:04.618269 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf97z\" (UniqueName: \"kubernetes.io/projected/94a16095-2446-4848-8204-65f4b867368f-kube-api-access-lf97z\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.084113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-j45zk" event={"ID":"94a16095-2446-4848-8204-65f4b867368f","Type":"ContainerDied","Data":"77adec710f7fa8b13d2fc0ac7d2ef1066ab2eedd20bf7d2c7be711a6c52dd8aa"} Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.084154 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77adec710f7fa8b13d2fc0ac7d2ef1066ab2eedd20bf7d2c7be711a6c52dd8aa" Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.084194 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-j45zk" Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.438731 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-46br5"] Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.443703 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-46br5"] Mar 11 10:10:05 crc kubenswrapper[4808]: I0311 10:10:05.801429 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f66e51-0d86-444d-a754-9ccd5aab1449" path="/var/lib/kubelet/pods/71f66e51-0d86-444d-a754-9ccd5aab1449/volumes" Mar 11 10:10:16 crc kubenswrapper[4808]: I0311 10:10:16.027595 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:10:16 crc kubenswrapper[4808]: I0311 10:10:16.028347 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:10:46 crc kubenswrapper[4808]: I0311 10:10:46.027426 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:10:46 crc kubenswrapper[4808]: I0311 10:10:46.028047 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:10:56 crc kubenswrapper[4808]: I0311 10:10:56.722582 4808 scope.go:117] "RemoveContainer" containerID="690c04ed0981ae8f63343af98ac6b0beb2df6a26eea293b03e71b92a88641915" Mar 11 10:10:56 crc kubenswrapper[4808]: I0311 10:10:56.780998 4808 scope.go:117] "RemoveContainer" containerID="06763e31982846dd7285bad417060d27456bee8f54c74baff14643169b2b3a35" Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.028188 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.028887 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.028957 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.031267 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 
10:11:16.031421 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37" gracePeriod=600 Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.749622 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37" exitCode=0 Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.749708 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37"} Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.750184 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"} Mar 11 10:11:16 crc kubenswrapper[4808]: I0311 10:11:16.750210 4808 scope.go:117] "RemoveContainer" containerID="d524673f3a545738ac720864e37310705b7ec99d76cdcdbcad5a32411f049b97" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.149459 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553732-2qmwm"] Mar 11 10:12:00 crc kubenswrapper[4808]: E0311 10:12:00.150466 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a16095-2446-4848-8204-65f4b867368f" containerName="oc" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.150485 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a16095-2446-4848-8204-65f4b867368f" containerName="oc" Mar 
11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.150701 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a16095-2446-4848-8204-65f4b867368f" containerName="oc" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.152082 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.157810 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.158041 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.158258 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.164184 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-2qmwm"] Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.334775 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctbp\" (UniqueName: \"kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp\") pod \"auto-csr-approver-29553732-2qmwm\" (UID: \"1814f4a4-98ad-4d9f-a953-06998cc96484\") " pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.436508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wctbp\" (UniqueName: \"kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp\") pod \"auto-csr-approver-29553732-2qmwm\" (UID: \"1814f4a4-98ad-4d9f-a953-06998cc96484\") " pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.456802 
4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wctbp\" (UniqueName: \"kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp\") pod \"auto-csr-approver-29553732-2qmwm\" (UID: \"1814f4a4-98ad-4d9f-a953-06998cc96484\") " pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.474771 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:00 crc kubenswrapper[4808]: I0311 10:12:00.914912 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-2qmwm"] Mar 11 10:12:01 crc kubenswrapper[4808]: I0311 10:12:01.170810 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" event={"ID":"1814f4a4-98ad-4d9f-a953-06998cc96484","Type":"ContainerStarted","Data":"63b7807c16c0f314dd4cc06673ed5655e4fb6274647c2153eb226727258e7090"} Mar 11 10:12:02 crc kubenswrapper[4808]: I0311 10:12:02.180766 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" event={"ID":"1814f4a4-98ad-4d9f-a953-06998cc96484","Type":"ContainerStarted","Data":"32525adbc52daa9218478bd0a179ab28a93f9ddf5aa84ecc966b6204f6192462"} Mar 11 10:12:02 crc kubenswrapper[4808]: I0311 10:12:02.203202 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" podStartSLOduration=1.321874938 podStartE2EDuration="2.203175681s" podCreationTimestamp="2026-03-11 10:12:00 +0000 UTC" firstStartedPulling="2026-03-11 10:12:00.930711278 +0000 UTC m=+5571.884034608" lastFinishedPulling="2026-03-11 10:12:01.812012011 +0000 UTC m=+5572.765335351" observedRunningTime="2026-03-11 10:12:02.196229944 +0000 UTC m=+5573.149553264" watchObservedRunningTime="2026-03-11 10:12:02.203175681 +0000 UTC m=+5573.156499041" 
Mar 11 10:12:03 crc kubenswrapper[4808]: I0311 10:12:03.190217 4808 generic.go:334] "Generic (PLEG): container finished" podID="1814f4a4-98ad-4d9f-a953-06998cc96484" containerID="32525adbc52daa9218478bd0a179ab28a93f9ddf5aa84ecc966b6204f6192462" exitCode=0 Mar 11 10:12:03 crc kubenswrapper[4808]: I0311 10:12:03.190334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" event={"ID":"1814f4a4-98ad-4d9f-a953-06998cc96484","Type":"ContainerDied","Data":"32525adbc52daa9218478bd0a179ab28a93f9ddf5aa84ecc966b6204f6192462"} Mar 11 10:12:04 crc kubenswrapper[4808]: I0311 10:12:04.526166 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:04 crc kubenswrapper[4808]: I0311 10:12:04.703567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wctbp\" (UniqueName: \"kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp\") pod \"1814f4a4-98ad-4d9f-a953-06998cc96484\" (UID: \"1814f4a4-98ad-4d9f-a953-06998cc96484\") " Mar 11 10:12:04 crc kubenswrapper[4808]: I0311 10:12:04.710245 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp" (OuterVolumeSpecName: "kube-api-access-wctbp") pod "1814f4a4-98ad-4d9f-a953-06998cc96484" (UID: "1814f4a4-98ad-4d9f-a953-06998cc96484"). InnerVolumeSpecName "kube-api-access-wctbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:12:04 crc kubenswrapper[4808]: I0311 10:12:04.805375 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wctbp\" (UniqueName: \"kubernetes.io/projected/1814f4a4-98ad-4d9f-a953-06998cc96484-kube-api-access-wctbp\") on node \"crc\" DevicePath \"\"" Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.210051 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" event={"ID":"1814f4a4-98ad-4d9f-a953-06998cc96484","Type":"ContainerDied","Data":"63b7807c16c0f314dd4cc06673ed5655e4fb6274647c2153eb226727258e7090"} Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.210123 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b7807c16c0f314dd4cc06673ed5655e4fb6274647c2153eb226727258e7090" Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.210142 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-2qmwm" Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.278167 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-5rpz8"] Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.286031 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-5rpz8"] Mar 11 10:12:05 crc kubenswrapper[4808]: I0311 10:12:05.797576 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1be8e5d-de4f-413c-94a5-eb6535cdd87f" path="/var/lib/kubelet/pods/e1be8e5d-de4f-413c-94a5-eb6535cdd87f/volumes" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.697547 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:12:15 crc kubenswrapper[4808]: E0311 10:12:15.698493 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1814f4a4-98ad-4d9f-a953-06998cc96484" 
containerName="oc" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.698510 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1814f4a4-98ad-4d9f-a953-06998cc96484" containerName="oc" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.698693 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1814f4a4-98ad-4d9f-a953-06998cc96484" containerName="oc" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.699317 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.701154 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-48lsp" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.708382 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.882145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4pv\" (UniqueName: \"kubernetes.io/projected/652963de-2f4b-4395-a2e9-4f1fbd52f042-kube-api-access-vr4pv\") pod \"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.882283 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17165a70-904f-494d-a9da-ed6b330a8192\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17165a70-904f-494d-a9da-ed6b330a8192\") pod \"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.983348 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4pv\" (UniqueName: \"kubernetes.io/projected/652963de-2f4b-4395-a2e9-4f1fbd52f042-kube-api-access-vr4pv\") pod 
\"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.983469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17165a70-904f-494d-a9da-ed6b330a8192\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17165a70-904f-494d-a9da-ed6b330a8192\") pod \"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.986253 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:12:15 crc kubenswrapper[4808]: I0311 10:12:15.986282 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17165a70-904f-494d-a9da-ed6b330a8192\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17165a70-904f-494d-a9da-ed6b330a8192\") pod \"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81aacc4d66bbbfdb0005b024bcfb7c0c61cb927b307613d5470650be85e0e7db/globalmount\"" pod="openstack/mariadb-copy-data" Mar 11 10:12:16 crc kubenswrapper[4808]: I0311 10:12:16.003322 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4pv\" (UniqueName: \"kubernetes.io/projected/652963de-2f4b-4395-a2e9-4f1fbd52f042-kube-api-access-vr4pv\") pod \"mariadb-copy-data\" (UID: \"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:16 crc kubenswrapper[4808]: I0311 10:12:16.020166 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17165a70-904f-494d-a9da-ed6b330a8192\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17165a70-904f-494d-a9da-ed6b330a8192\") pod \"mariadb-copy-data\" (UID: 
\"652963de-2f4b-4395-a2e9-4f1fbd52f042\") " pod="openstack/mariadb-copy-data" Mar 11 10:12:16 crc kubenswrapper[4808]: I0311 10:12:16.320996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Mar 11 10:12:16 crc kubenswrapper[4808]: I0311 10:12:16.803743 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:12:17 crc kubenswrapper[4808]: I0311 10:12:17.311864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"652963de-2f4b-4395-a2e9-4f1fbd52f042","Type":"ContainerStarted","Data":"39e8b58ab7016ddea7c9741857731253157a805ccb3b0522fc618d2aa6f8dd37"} Mar 11 10:12:17 crc kubenswrapper[4808]: I0311 10:12:17.312421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"652963de-2f4b-4395-a2e9-4f1fbd52f042","Type":"ContainerStarted","Data":"203f2fff91fcfffaa3f80c2b09356d97d2a62cef7f4a48dfa48087140d3f65f8"} Mar 11 10:12:17 crc kubenswrapper[4808]: I0311 10:12:17.339308 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.3392764870000002 podStartE2EDuration="3.339276487s" podCreationTimestamp="2026-03-11 10:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:12:17.330136358 +0000 UTC m=+5588.283459678" watchObservedRunningTime="2026-03-11 10:12:17.339276487 +0000 UTC m=+5588.292599807" Mar 11 10:12:19 crc kubenswrapper[4808]: I0311 10:12:19.783465 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:19 crc kubenswrapper[4808]: I0311 10:12:19.787141 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:19 crc kubenswrapper[4808]: I0311 10:12:19.816732 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:19 crc kubenswrapper[4808]: I0311 10:12:19.946102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkgn\" (UniqueName: \"kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn\") pod \"mariadb-client\" (UID: \"9c469c35-d9f2-4e01-bb4e-61c91f317c6e\") " pod="openstack/mariadb-client" Mar 11 10:12:20 crc kubenswrapper[4808]: I0311 10:12:20.048583 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkgn\" (UniqueName: \"kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn\") pod \"mariadb-client\" (UID: \"9c469c35-d9f2-4e01-bb4e-61c91f317c6e\") " pod="openstack/mariadb-client" Mar 11 10:12:20 crc kubenswrapper[4808]: I0311 10:12:20.074500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkgn\" (UniqueName: \"kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn\") pod \"mariadb-client\" (UID: \"9c469c35-d9f2-4e01-bb4e-61c91f317c6e\") " pod="openstack/mariadb-client" Mar 11 10:12:20 crc kubenswrapper[4808]: I0311 10:12:20.119767 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:20 crc kubenswrapper[4808]: I0311 10:12:20.595402 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:21 crc kubenswrapper[4808]: I0311 10:12:21.344491 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" containerID="57aa520cffcff80b7f46b8071963467abd0ad96a1a0d2f702ebf8a1232669f80" exitCode=0 Mar 11 10:12:21 crc kubenswrapper[4808]: I0311 10:12:21.344538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9c469c35-d9f2-4e01-bb4e-61c91f317c6e","Type":"ContainerDied","Data":"57aa520cffcff80b7f46b8071963467abd0ad96a1a0d2f702ebf8a1232669f80"} Mar 11 10:12:21 crc kubenswrapper[4808]: I0311 10:12:21.344560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9c469c35-d9f2-4e01-bb4e-61c91f317c6e","Type":"ContainerStarted","Data":"814f110e7206e030f3059d8f88e140aa550723d7193eff3e6390a03f2930b8ab"} Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.680533 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.703588 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_9c469c35-d9f2-4e01-bb4e-61c91f317c6e/mariadb-client/0.log" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.733390 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.738842 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.806799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crkgn\" (UniqueName: \"kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn\") pod \"9c469c35-d9f2-4e01-bb4e-61c91f317c6e\" (UID: \"9c469c35-d9f2-4e01-bb4e-61c91f317c6e\") " Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.823664 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn" (OuterVolumeSpecName: "kube-api-access-crkgn") pod "9c469c35-d9f2-4e01-bb4e-61c91f317c6e" (UID: "9c469c35-d9f2-4e01-bb4e-61c91f317c6e"). InnerVolumeSpecName "kube-api-access-crkgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.844125 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:22 crc kubenswrapper[4808]: E0311 10:12:22.844747 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" containerName="mariadb-client" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.844773 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" containerName="mariadb-client" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.844947 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" containerName="mariadb-client" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.845592 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.850945 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:22 crc kubenswrapper[4808]: I0311 10:12:22.909088 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crkgn\" (UniqueName: \"kubernetes.io/projected/9c469c35-d9f2-4e01-bb4e-61c91f317c6e-kube-api-access-crkgn\") on node \"crc\" DevicePath \"\"" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.011321 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tstf8\" (UniqueName: \"kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8\") pod \"mariadb-client\" (UID: \"472bf783-2ebe-4bbf-92b8-e80f22d4debc\") " pod="openstack/mariadb-client" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.113659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tstf8\" (UniqueName: 
\"kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8\") pod \"mariadb-client\" (UID: \"472bf783-2ebe-4bbf-92b8-e80f22d4debc\") " pod="openstack/mariadb-client" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.139198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tstf8\" (UniqueName: \"kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8\") pod \"mariadb-client\" (UID: \"472bf783-2ebe-4bbf-92b8-e80f22d4debc\") " pod="openstack/mariadb-client" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.185931 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.375670 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814f110e7206e030f3059d8f88e140aa550723d7193eff3e6390a03f2930b8ab" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.375943 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.405965 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" podUID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.658974 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:23 crc kubenswrapper[4808]: W0311 10:12:23.665346 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod472bf783_2ebe_4bbf_92b8_e80f22d4debc.slice/crio-1296c85b14b120c551592141cc14e72b471daa1337b5a4a871985c3b5cf818d1 WatchSource:0}: Error finding container 1296c85b14b120c551592141cc14e72b471daa1337b5a4a871985c3b5cf818d1: Status 404 returned error can't find the container with id 1296c85b14b120c551592141cc14e72b471daa1337b5a4a871985c3b5cf818d1 Mar 11 10:12:23 crc kubenswrapper[4808]: I0311 10:12:23.801271 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c469c35-d9f2-4e01-bb4e-61c91f317c6e" path="/var/lib/kubelet/pods/9c469c35-d9f2-4e01-bb4e-61c91f317c6e/volumes" Mar 11 10:12:24 crc kubenswrapper[4808]: I0311 10:12:24.386751 4808 generic.go:334] "Generic (PLEG): container finished" podID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" containerID="6668edf19dbddb4c8f58dc63731fa46b017caa8f004a3f943d6811a5b05d6215" exitCode=0 Mar 11 10:12:24 crc kubenswrapper[4808]: I0311 10:12:24.386802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"472bf783-2ebe-4bbf-92b8-e80f22d4debc","Type":"ContainerDied","Data":"6668edf19dbddb4c8f58dc63731fa46b017caa8f004a3f943d6811a5b05d6215"} Mar 11 10:12:24 crc kubenswrapper[4808]: I0311 10:12:24.386833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"472bf783-2ebe-4bbf-92b8-e80f22d4debc","Type":"ContainerStarted","Data":"1296c85b14b120c551592141cc14e72b471daa1337b5a4a871985c3b5cf818d1"} Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.754119 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.775124 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_472bf783-2ebe-4bbf-92b8-e80f22d4debc/mariadb-client/0.log" Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.818670 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.823011 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.859897 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tstf8\" (UniqueName: \"kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8\") pod \"472bf783-2ebe-4bbf-92b8-e80f22d4debc\" (UID: \"472bf783-2ebe-4bbf-92b8-e80f22d4debc\") " Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.869878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8" (OuterVolumeSpecName: "kube-api-access-tstf8") pod "472bf783-2ebe-4bbf-92b8-e80f22d4debc" (UID: "472bf783-2ebe-4bbf-92b8-e80f22d4debc"). InnerVolumeSpecName "kube-api-access-tstf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:12:25 crc kubenswrapper[4808]: I0311 10:12:25.962630 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tstf8\" (UniqueName: \"kubernetes.io/projected/472bf783-2ebe-4bbf-92b8-e80f22d4debc-kube-api-access-tstf8\") on node \"crc\" DevicePath \"\"" Mar 11 10:12:26 crc kubenswrapper[4808]: I0311 10:12:26.405383 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1296c85b14b120c551592141cc14e72b471daa1337b5a4a871985c3b5cf818d1" Mar 11 10:12:26 crc kubenswrapper[4808]: I0311 10:12:26.405502 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:12:27 crc kubenswrapper[4808]: I0311 10:12:27.801625 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" path="/var/lib/kubelet/pods/472bf783-2ebe-4bbf-92b8-e80f22d4debc/volumes" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.856057 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:12:55 crc kubenswrapper[4808]: E0311 10:12:55.857150 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" containerName="mariadb-client" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.857171 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" containerName="mariadb-client" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.857422 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="472bf783-2ebe-4bbf-92b8-e80f22d4debc" containerName="mariadb-client" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.858645 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.875621 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.875762 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f8wjg" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.875768 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.875980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.876185 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.882557 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.884246 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.894736 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.896815 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.911324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.934260 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.947585 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953126 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953194 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfp75\" (UniqueName: \"kubernetes.io/projected/58c3819d-74e4-4b60-9633-d91a32c760a4-kube-api-access-xfp75\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953239 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-config\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:55 crc kubenswrapper[4808]: I0311 10:12:55.953290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66a02844-193d-4f8b-845c-a11c9524abfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66a02844-193d-4f8b-845c-a11c9524abfe\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054202 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljsv\" (UniqueName: \"kubernetes.io/projected/4b0b0d29-2902-4951-812c-05049145f128-kube-api-access-5ljsv\") pod \"ovsdbserver-nb-2\" (UID: 
\"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054262 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054380 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b0b0d29-2902-4951-812c-05049145f128-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054429 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054462 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " 
pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054485 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054684 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jznl\" (UniqueName: \"kubernetes.io/projected/8150999f-828b-4e71-98b6-9edd203c7b27-kube-api-access-8jznl\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054787 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfp75\" (UniqueName: \"kubernetes.io/projected/58c3819d-74e4-4b60-9633-d91a32c760a4-kube-api-access-xfp75\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc 
kubenswrapper[4808]: I0311 10:12:56.054841 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054868 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054927 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-config\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.054999 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66a02844-193d-4f8b-845c-a11c9524abfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66a02844-193d-4f8b-845c-a11c9524abfe\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055070 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-config\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055119 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-config\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055190 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055256 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055277 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.055903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.056626 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58c3819d-74e4-4b60-9633-d91a32c760a4-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.061086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.061262 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.066794 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.066986 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66a02844-193d-4f8b-845c-a11c9524abfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66a02844-193d-4f8b-845c-a11c9524abfe\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bc4400267a9708d45f30185d5e772636a8af46f481eb55f8caa4564a340c63b/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.066875 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c3819d-74e4-4b60-9633-d91a32c760a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.077216 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfp75\" (UniqueName: \"kubernetes.io/projected/58c3819d-74e4-4b60-9633-d91a32c760a4-kube-api-access-xfp75\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.096759 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66a02844-193d-4f8b-845c-a11c9524abfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66a02844-193d-4f8b-845c-a11c9524abfe\") pod \"ovsdbserver-nb-0\" (UID: \"58c3819d-74e4-4b60-9633-d91a32c760a4\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156704 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-combined-ca-bundle\") pod 
\"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jznl\" (UniqueName: \"kubernetes.io/projected/8150999f-828b-4e71-98b6-9edd203c7b27-kube-api-access-8jznl\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156836 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156885 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc 
kubenswrapper[4808]: I0311 10:12:56.156911 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.156958 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-config\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-config\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157052 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157093 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157131 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157201 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljsv\" (UniqueName: \"kubernetes.io/projected/4b0b0d29-2902-4951-812c-05049145f128-kube-api-access-5ljsv\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157271 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.157310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b0b0d29-2902-4951-812c-05049145f128-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.158058 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b0b0d29-2902-4951-812c-05049145f128-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: 
\"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.158129 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-config\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.158154 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-config\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.158582 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b0b0d29-2902-4951-812c-05049145f128-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.158728 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.159723 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150999f-828b-4e71-98b6-9edd203c7b27-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.160402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.160669 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.160699 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdec6158c368cb85497dbca330b2778ca4d940c97ef365b6fcbda80157400d14/globalmount\"" pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.161586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.161745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.161856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" 
(UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.161978 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.162011 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8025d9eb71fcfddd6b8523fd7ca0a5522c87d8440549ec6e8a8ef3afd0d70c8/globalmount\"" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.167237 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b0b0d29-2902-4951-812c-05049145f128-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.167702 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150999f-828b-4e71-98b6-9edd203c7b27-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.173631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jznl\" (UniqueName: \"kubernetes.io/projected/8150999f-828b-4e71-98b6-9edd203c7b27-kube-api-access-8jznl\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc 
kubenswrapper[4808]: I0311 10:12:56.179166 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljsv\" (UniqueName: \"kubernetes.io/projected/4b0b0d29-2902-4951-812c-05049145f128-kube-api-access-5ljsv\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.197680 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.200738 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b231143-7d48-4ef1-bb63-83f5e3f0c31a\") pod \"ovsdbserver-nb-1\" (UID: \"8150999f-828b-4e71-98b6-9edd203c7b27\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.203046 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d3ee66e-b34a-4874-aee2-0ae3fa3ff658\") pod \"ovsdbserver-nb-2\" (UID: \"4b0b0d29-2902-4951-812c-05049145f128\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.218586 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.225858 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.601528 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.650304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8150999f-828b-4e71-98b6-9edd203c7b27","Type":"ContainerStarted","Data":"0702b81bc6d858e80636a9fc25a1d232efb0e5895a5a87a6c7c973326dfdeb28"} Mar 11 10:12:56 crc kubenswrapper[4808]: W0311 10:12:56.717450 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c3819d_74e4_4b60_9633_d91a32c760a4.slice/crio-896143ce6c533f1921af4b99d952373d103e08cf48a42693b24675116097c8c1 WatchSource:0}: Error finding container 896143ce6c533f1921af4b99d952373d103e08cf48a42693b24675116097c8c1: Status 404 returned error can't find the container with id 896143ce6c533f1921af4b99d952373d103e08cf48a42693b24675116097c8c1 Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.717562 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.842095 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:12:56 crc kubenswrapper[4808]: W0311 10:12:56.858731 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b0b0d29_2902_4951_812c_05049145f128.slice/crio-0390f7102d42f6101e6b7c695c08198b5490f23a56035bae6fc059bc1c065d1e WatchSource:0}: Error finding container 0390f7102d42f6101e6b7c695c08198b5490f23a56035bae6fc059bc1c065d1e: Status 404 returned error can't find the container with id 0390f7102d42f6101e6b7c695c08198b5490f23a56035bae6fc059bc1c065d1e Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.872752 4808 scope.go:117] "RemoveContainer" 
containerID="48249d8d185d7a10c8c2f354e39b16aef09249a962ff163cd9614612c427fcee" Mar 11 10:12:56 crc kubenswrapper[4808]: I0311 10:12:56.915511 4808 scope.go:117] "RemoveContainer" containerID="6cad207c79c3a1871cddd5408012ce1fba7291ba5dac44dd2bedeab6da4e97ef" Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.659644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8150999f-828b-4e71-98b6-9edd203c7b27","Type":"ContainerStarted","Data":"dc2893867c057529aa7c64ca53cac9087486269a50654cea11b774cfe15cad60"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.660201 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8150999f-828b-4e71-98b6-9edd203c7b27","Type":"ContainerStarted","Data":"5489fa84531da4a7b085a2e0979402bbd2196db5ee2aebd02f8c25c9dcd18206"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.663200 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4b0b0d29-2902-4951-812c-05049145f128","Type":"ContainerStarted","Data":"e6b67a9e73cf32c67c796dc5c89d95965fa4e26b5c7d5cf05e08a97f2d95105b"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.663257 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4b0b0d29-2902-4951-812c-05049145f128","Type":"ContainerStarted","Data":"84beb1a10deb047bdd401ec9f63bdccb408ed5190cd65490054b85419a05e882"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.663271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"4b0b0d29-2902-4951-812c-05049145f128","Type":"ContainerStarted","Data":"0390f7102d42f6101e6b7c695c08198b5490f23a56035bae6fc059bc1c065d1e"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.666995 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"58c3819d-74e4-4b60-9633-d91a32c760a4","Type":"ContainerStarted","Data":"22135548b04b18f3f09964e61c0ee14ed9cafa675ea75caeccb7013b3d50bfd5"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.667127 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58c3819d-74e4-4b60-9633-d91a32c760a4","Type":"ContainerStarted","Data":"95873359fd5023a50719cc1261687046b31eb2bcc341e5d7273c25d665eba2c9"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.667215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58c3819d-74e4-4b60-9633-d91a32c760a4","Type":"ContainerStarted","Data":"896143ce6c533f1921af4b99d952373d103e08cf48a42693b24675116097c8c1"} Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.688736 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.688711983 podStartE2EDuration="3.688711983s" podCreationTimestamp="2026-03-11 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:12:57.678520424 +0000 UTC m=+5628.631843824" watchObservedRunningTime="2026-03-11 10:12:57.688711983 +0000 UTC m=+5628.642035323" Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.702067 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.702047511 podStartE2EDuration="3.702047511s" podCreationTimestamp="2026-03-11 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:12:57.695188456 +0000 UTC m=+5628.648511796" watchObservedRunningTime="2026-03-11 10:12:57.702047511 +0000 UTC m=+5628.655370831" Mar 11 10:12:57 crc kubenswrapper[4808]: I0311 10:12:57.715814 4808 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.71579491 podStartE2EDuration="3.71579491s" podCreationTimestamp="2026-03-11 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:12:57.713580117 +0000 UTC m=+5628.666903467" watchObservedRunningTime="2026-03-11 10:12:57.71579491 +0000 UTC m=+5628.669118230" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.430873 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.443444 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.448728 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.448909 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.449032 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p58ml" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.449443 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.449650 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.450914 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.457045 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.481245 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.482802 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.492965 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.499960 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500533 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500605 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\") pod \"ovsdbserver-sb-0\" (UID: 
\"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500635 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-config\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500688 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnd4\" (UniqueName: \"kubernetes.io/projected/047f7cf8-06eb-4caf-8ffc-846dcee172f4-kube-api-access-drnd4\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.500780 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 
crc kubenswrapper[4808]: I0311 10:12:58.601937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602020 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602081 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-config\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhkk\" (UniqueName: \"kubernetes.io/projected/ee49f07d-8df5-402e-824a-3635bdfbe980-kube-api-access-6jhkk\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602161 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602187 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-config\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602339 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602393 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2nbq\" (UniqueName: \"kubernetes.io/projected/bffb61a6-2929-4424-a4cc-b8cac705264a-kube-api-access-j2nbq\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602465 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602499 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602581 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602766 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.602954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drnd4\" (UniqueName: \"kubernetes.io/projected/047f7cf8-06eb-4caf-8ffc-846dcee172f4-kube-api-access-drnd4\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-config\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.603636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.605389 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-config\") 
pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.606720 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047f7cf8-06eb-4caf-8ffc-846dcee172f4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.607225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.607677 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.607739 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/19287813f119a447fb2dd10fcb3bbe4d862b4b22b2451ee853fd7012c5761f21/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.608141 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.615688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/047f7cf8-06eb-4caf-8ffc-846dcee172f4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.625271 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnd4\" (UniqueName: \"kubernetes.io/projected/047f7cf8-06eb-4caf-8ffc-846dcee172f4-kube-api-access-drnd4\") pod \"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.641193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-573964a3-e2aa-4b51-8e94-bc1cf8429590\") pod 
\"ovsdbserver-sb-0\" (UID: \"047f7cf8-06eb-4caf-8ffc-846dcee172f4\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.704874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.704920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.704946 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.704979 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-config\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc 
kubenswrapper[4808]: I0311 10:12:58.705029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705047 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705070 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705097 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705123 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhkk\" (UniqueName: \"kubernetes.io/projected/ee49f07d-8df5-402e-824a-3635bdfbe980-kube-api-access-6jhkk\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705147 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705168 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-config\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705185 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2nbq\" (UniqueName: \"kubernetes.io/projected/bffb61a6-2929-4424-a4cc-b8cac705264a-kube-api-access-j2nbq\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" 
(UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.705750 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.706649 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-config\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.707841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffb61a6-2929-4424-a4cc-b8cac705264a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.707989 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.708973 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee49f07d-8df5-402e-824a-3635bdfbe980-config\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.709368 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.709903 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.709939 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db51b88a8cbf91da4334a8480d098039d142a9240eafa34241d79d404ec3f3ed/globalmount\"" pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.709955 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.710104 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.710754 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.710804 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27c6d2ac2e8cd7917f677bb25138ad3698dabc9e6e4420d772feee65fa7e0276/globalmount\"" pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.710831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.712398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.714189 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bffb61a6-2929-4424-a4cc-b8cac705264a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.718130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee49f07d-8df5-402e-824a-3635bdfbe980-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.724882 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2nbq\" (UniqueName: \"kubernetes.io/projected/bffb61a6-2929-4424-a4cc-b8cac705264a-kube-api-access-j2nbq\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.732813 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhkk\" (UniqueName: \"kubernetes.io/projected/ee49f07d-8df5-402e-824a-3635bdfbe980-kube-api-access-6jhkk\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.741732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352e95ae-5b45-492f-9a6e-932e1c561f4b\") pod \"ovsdbserver-sb-2\" (UID: \"bffb61a6-2929-4424-a4cc-b8cac705264a\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:58 crc kubenswrapper[4808]: 
I0311 10:12:58.750456 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7ff3663-9f8a-4bc6-87e6-3a4a3fa288de\") pod \"ovsdbserver-sb-1\" (UID: \"ee49f07d-8df5-402e-824a-3635bdfbe980\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.773387 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.781573 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 11 10:12:58 crc kubenswrapper[4808]: I0311 10:12:58.803209 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.198407 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.219842 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.226741 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.244188 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.272849 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.340248 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:12:59 crc kubenswrapper[4808]: W0311 10:12:59.350834 4808 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod047f7cf8_06eb_4caf_8ffc_846dcee172f4.slice/crio-08097c3c74c4b1831b7203edb90bb0e3eaa158a5ea324d9370526875eabab5d0 WatchSource:0}: Error finding container 08097c3c74c4b1831b7203edb90bb0e3eaa158a5ea324d9370526875eabab5d0: Status 404 returned error can't find the container with id 08097c3c74c4b1831b7203edb90bb0e3eaa158a5ea324d9370526875eabab5d0 Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.442993 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.695239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ee49f07d-8df5-402e-824a-3635bdfbe980","Type":"ContainerStarted","Data":"b5a458e5a69ace73cc1235c0f1b71b52dd4041fb0a2655cdbc8502b2443d7545"} Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.695308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ee49f07d-8df5-402e-824a-3635bdfbe980","Type":"ContainerStarted","Data":"120cc27690674bdf10ec7851484480ca4bc948adc801106c823fa6325ceef2e9"} Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.702069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047f7cf8-06eb-4caf-8ffc-846dcee172f4","Type":"ContainerStarted","Data":"a758b3314ae6d05256c8d1d78fb393ad5aa38799574cac4fe1f4b5e9ca4c1d55"} Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.702109 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047f7cf8-06eb-4caf-8ffc-846dcee172f4","Type":"ContainerStarted","Data":"08097c3c74c4b1831b7203edb90bb0e3eaa158a5ea324d9370526875eabab5d0"} Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.702125 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 11 10:12:59 crc kubenswrapper[4808]: 
I0311 10:12:59.703473 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 11 10:12:59 crc kubenswrapper[4808]: I0311 10:12:59.987906 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:12:59 crc kubenswrapper[4808]: W0311 10:12:59.988042 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbffb61a6_2929_4424_a4cc_b8cac705264a.slice/crio-cf4c472ee07e4740911c4f6a3d7c16a2f84ed6b0fc8d3483be0228bd85a831c3 WatchSource:0}: Error finding container cf4c472ee07e4740911c4f6a3d7c16a2f84ed6b0fc8d3483be0228bd85a831c3: Status 404 returned error can't find the container with id cf4c472ee07e4740911c4f6a3d7c16a2f84ed6b0fc8d3483be0228bd85a831c3 Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.710317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ee49f07d-8df5-402e-824a-3635bdfbe980","Type":"ContainerStarted","Data":"58c9eb3c053560a3e37fdf73b5713c797815a0130df90fc0563c09d42bb398e6"} Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.711854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047f7cf8-06eb-4caf-8ffc-846dcee172f4","Type":"ContainerStarted","Data":"0f48ff49408e2d3d5f29362e97d79ef3d8f8ab5903e709da543088e4a40eb579"} Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.714237 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"bffb61a6-2929-4424-a4cc-b8cac705264a","Type":"ContainerStarted","Data":"a22bceb3905510c1d15f5f00f29b543e921bc43fa0d9561ae4e8e2da1869a3ab"} Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.714342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"bffb61a6-2929-4424-a4cc-b8cac705264a","Type":"ContainerStarted","Data":"07b68f2b7d367ebdf38326a0a64a4604704cd76f60b70c9f7e55159fcd73c5fb"} Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.714494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"bffb61a6-2929-4424-a4cc-b8cac705264a","Type":"ContainerStarted","Data":"cf4c472ee07e4740911c4f6a3d7c16a2f84ed6b0fc8d3483be0228bd85a831c3"} Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.728690 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.72867194 podStartE2EDuration="3.72867194s" podCreationTimestamp="2026-03-11 10:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:00.728053202 +0000 UTC m=+5631.681376542" watchObservedRunningTime="2026-03-11 10:13:00.72867194 +0000 UTC m=+5631.681995260" Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.757085 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.757070724 podStartE2EDuration="3.757070724s" podCreationTimestamp="2026-03-11 10:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:00.753332958 +0000 UTC m=+5631.706656278" watchObservedRunningTime="2026-03-11 10:13:00.757070724 +0000 UTC m=+5631.710394074" Mar 11 10:13:00 crc kubenswrapper[4808]: I0311 10:13:00.775890 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.775873947 podStartE2EDuration="3.775873947s" podCreationTimestamp="2026-03-11 10:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 
10:13:00.775109275 +0000 UTC m=+5631.728432595" watchObservedRunningTime="2026-03-11 10:13:00.775873947 +0000 UTC m=+5631.729197267" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.226954 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.239732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.272713 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.532292 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.533877 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.537066 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.552078 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.667342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.667718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb\") pod 
\"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.667973 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.668156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2vk\" (UniqueName: \"kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.769190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.769574 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.769645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: 
\"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.769687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2vk\" (UniqueName: \"kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.770309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.770541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.770614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.773747 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.781940 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 
10:13:01.788709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2vk\" (UniqueName: \"kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk\") pod \"dnsmasq-dns-867c585875-tnnx2\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.803826 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 11 10:13:01 crc kubenswrapper[4808]: I0311 10:13:01.858651 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.265466 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.297558 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.314682 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.737620 4808 generic.go:334] "Generic (PLEG): container finished" podID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerID="7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e" exitCode=0 Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.737716 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867c585875-tnnx2" event={"ID":"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d","Type":"ContainerDied","Data":"7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e"} Mar 11 10:13:02 crc kubenswrapper[4808]: I0311 10:13:02.738179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867c585875-tnnx2" 
event={"ID":"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d","Type":"ContainerStarted","Data":"13a38959aacca34d56f8ca3a4a7089a762c5026a38b9e4daccb323e0a503ddae"} Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.747663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867c585875-tnnx2" event={"ID":"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d","Type":"ContainerStarted","Data":"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c"} Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.749860 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.773596 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.778137 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-867c585875-tnnx2" podStartSLOduration=2.778109385 podStartE2EDuration="2.778109385s" podCreationTimestamp="2026-03-11 10:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:03.774860863 +0000 UTC m=+5634.728184193" watchObservedRunningTime="2026-03-11 10:13:03.778109385 +0000 UTC m=+5634.731432715" Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.782111 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 11 10:13:03 crc kubenswrapper[4808]: I0311 10:13:03.804220 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 11 10:13:04 crc kubenswrapper[4808]: I0311 10:13:04.827120 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 11 10:13:04 crc kubenswrapper[4808]: I0311 10:13:04.846055 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/ovsdbserver-sb-1" Mar 11 10:13:04 crc kubenswrapper[4808]: I0311 10:13:04.868943 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 11 10:13:04 crc kubenswrapper[4808]: I0311 10:13:04.896855 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 11 10:13:04 crc kubenswrapper[4808]: I0311 10:13:04.906867 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.194795 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.258111 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.259383 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.262945 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.273134 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.332749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.333129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.333209 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.333297 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h64t\" (UniqueName: \"kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.333344 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.435044 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.435151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.435202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.435266 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.435380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h64t\" (UniqueName: \"kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.436225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.436219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc\") pod 
\"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.436337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.436391 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.453130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h64t\" (UniqueName: \"kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t\") pod \"dnsmasq-dns-6cc897dccf-z6rp4\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:05 crc kubenswrapper[4808]: I0311 10:13:05.578088 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:06 crc kubenswrapper[4808]: I0311 10:13:06.041803 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:06 crc kubenswrapper[4808]: W0311 10:13:06.059211 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda375a012_b69e_494d_8648_4be3c035b2f1.slice/crio-da062735a018b34ba6c3535eb1e44282ee2a6c1029404d969934fa0b57e98297 WatchSource:0}: Error finding container da062735a018b34ba6c3535eb1e44282ee2a6c1029404d969934fa0b57e98297: Status 404 returned error can't find the container with id da062735a018b34ba6c3535eb1e44282ee2a6c1029404d969934fa0b57e98297 Mar 11 10:13:06 crc kubenswrapper[4808]: I0311 10:13:06.775875 4808 generic.go:334] "Generic (PLEG): container finished" podID="a375a012-b69e-494d-8648-4be3c035b2f1" containerID="bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39" exitCode=0 Mar 11 10:13:06 crc kubenswrapper[4808]: I0311 10:13:06.776421 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-867c585875-tnnx2" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="dnsmasq-dns" containerID="cri-o://e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c" gracePeriod=10 Mar 11 10:13:06 crc kubenswrapper[4808]: I0311 10:13:06.775931 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" event={"ID":"a375a012-b69e-494d-8648-4be3c035b2f1","Type":"ContainerDied","Data":"bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39"} Mar 11 10:13:06 crc kubenswrapper[4808]: I0311 10:13:06.776510 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" 
event={"ID":"a375a012-b69e-494d-8648-4be3c035b2f1","Type":"ContainerStarted","Data":"da062735a018b34ba6c3535eb1e44282ee2a6c1029404d969934fa0b57e98297"} Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.232345 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.267614 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb\") pod \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.267748 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d2vk\" (UniqueName: \"kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk\") pod \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.267792 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc\") pod \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.267918 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config\") pod \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\" (UID: \"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d\") " Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.274061 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk" (OuterVolumeSpecName: 
"kube-api-access-9d2vk") pod "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" (UID: "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d"). InnerVolumeSpecName "kube-api-access-9d2vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.307858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" (UID: "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.321630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" (UID: "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.338592 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config" (OuterVolumeSpecName: "config") pod "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" (UID: "31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.370262 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.370310 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.370328 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d2vk\" (UniqueName: \"kubernetes.io/projected/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-kube-api-access-9d2vk\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.370340 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.795514 4808 generic.go:334] "Generic (PLEG): container finished" podID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerID="e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c" exitCode=0 Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.795699 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867c585875-tnnx2" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.805105 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.805142 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" event={"ID":"a375a012-b69e-494d-8648-4be3c035b2f1","Type":"ContainerStarted","Data":"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff"} Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.805160 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867c585875-tnnx2" event={"ID":"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d","Type":"ContainerDied","Data":"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c"} Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.805181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867c585875-tnnx2" event={"ID":"31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d","Type":"ContainerDied","Data":"13a38959aacca34d56f8ca3a4a7089a762c5026a38b9e4daccb323e0a503ddae"} Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.805202 4808 scope.go:117] "RemoveContainer" containerID="e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.830788 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" podStartSLOduration=2.830766646 podStartE2EDuration="2.830766646s" podCreationTimestamp="2026-03-11 10:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:07.818450797 +0000 UTC m=+5638.771774157" watchObservedRunningTime="2026-03-11 10:13:07.830766646 +0000 UTC m=+5638.784089976" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.835705 
4808 scope.go:117] "RemoveContainer" containerID="7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.862527 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.871494 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-867c585875-tnnx2"] Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.872589 4808 scope.go:117] "RemoveContainer" containerID="e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c" Mar 11 10:13:07 crc kubenswrapper[4808]: E0311 10:13:07.873043 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c\": container with ID starting with e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c not found: ID does not exist" containerID="e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.873076 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c"} err="failed to get container status \"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c\": rpc error: code = NotFound desc = could not find container \"e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c\": container with ID starting with e01b92c01fa63298932561b6909605cb75e9f4272cbfb47a34a0955ae0f6bc7c not found: ID does not exist" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.873099 4808 scope.go:117] "RemoveContainer" containerID="7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e" Mar 11 10:13:07 crc kubenswrapper[4808]: E0311 10:13:07.873374 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e\": container with ID starting with 7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e not found: ID does not exist" containerID="7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e" Mar 11 10:13:07 crc kubenswrapper[4808]: I0311 10:13:07.873408 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e"} err="failed to get container status \"7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e\": rpc error: code = NotFound desc = could not find container \"7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e\": container with ID starting with 7aae61de2d01766e7282aac1c8f8bde7b1417d68665b44d751458c166389290e not found: ID does not exist" Mar 11 10:13:08 crc kubenswrapper[4808]: I0311 10:13:08.864686 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 11 10:13:09 crc kubenswrapper[4808]: I0311 10:13:09.803879 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" path="/var/lib/kubelet/pods/31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d/volumes" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.154487 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:13:11 crc kubenswrapper[4808]: E0311 10:13:11.155004 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="dnsmasq-dns" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.155016 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="dnsmasq-dns" Mar 11 10:13:11 crc kubenswrapper[4808]: E0311 10:13:11.155042 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="init" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.155048 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="init" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.155189 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ff1e1e-2e52-4ac0-93dc-59fbd07f7a6d" containerName="dnsmasq-dns" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.155711 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.166202 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.178056 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.236780 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/25eaf277-9c52-4ae6-909f-5340d24dc284-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.236825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6888\" (UniqueName: \"kubernetes.io/projected/25eaf277-9c52-4ae6-909f-5340d24dc284-kube-api-access-v6888\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.236934 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.338190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/25eaf277-9c52-4ae6-909f-5340d24dc284-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.338239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6888\" (UniqueName: \"kubernetes.io/projected/25eaf277-9c52-4ae6-909f-5340d24dc284-kube-api-access-v6888\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.338392 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.342144 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.342193 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a2c6f52f34ff9e9429a95b9c6e55057fa3641466d66714889c73881fc8dbec5d/globalmount\"" pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.352119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/25eaf277-9c52-4ae6-909f-5340d24dc284-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.354997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6888\" (UniqueName: \"kubernetes.io/projected/25eaf277-9c52-4ae6-909f-5340d24dc284-kube-api-access-v6888\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.374259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b10aab34-b31c-4f36-8eca-6b8873a5ac4a\") pod \"ovn-copy-data\" (UID: \"25eaf277-9c52-4ae6-909f-5340d24dc284\") " pod="openstack/ovn-copy-data" Mar 11 10:13:11 crc kubenswrapper[4808]: I0311 10:13:11.484166 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 11 10:13:12 crc kubenswrapper[4808]: I0311 10:13:12.003024 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:13:12 crc kubenswrapper[4808]: I0311 10:13:12.843017 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"25eaf277-9c52-4ae6-909f-5340d24dc284","Type":"ContainerStarted","Data":"d2afc8ba1d885401404982ef25119acc8fa3682204728febb05afab55ff7aeb6"} Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.579590 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.651991 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.652796 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerName="dnsmasq-dns" containerID="cri-o://e0c317b2a6b610e601fc1aad6a1e17203ba8ecaf2fef2ad73672db5cf2f8e6a4" gracePeriod=10 Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.865568 4808 generic.go:334] "Generic (PLEG): container finished" podID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerID="e0c317b2a6b610e601fc1aad6a1e17203ba8ecaf2fef2ad73672db5cf2f8e6a4" exitCode=0 Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.865620 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" event={"ID":"8e28621f-b235-4e1a-a082-2c7542a2c3ac","Type":"ContainerDied","Data":"e0c317b2a6b610e601fc1aad6a1e17203ba8ecaf2fef2ad73672db5cf2f8e6a4"} Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.866700 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" 
event={"ID":"25eaf277-9c52-4ae6-909f-5340d24dc284","Type":"ContainerStarted","Data":"eca33c63eff46a37602eb89b6f438f29919879b2a727a84e10ad79a36daf88bb"} Mar 11 10:13:15 crc kubenswrapper[4808]: I0311 10:13:15.885203 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.216460566 podStartE2EDuration="5.885183338s" podCreationTimestamp="2026-03-11 10:13:10 +0000 UTC" firstStartedPulling="2026-03-11 10:13:12.013577514 +0000 UTC m=+5642.966900834" lastFinishedPulling="2026-03-11 10:13:14.682300246 +0000 UTC m=+5645.635623606" observedRunningTime="2026-03-11 10:13:15.878882789 +0000 UTC m=+5646.832206109" watchObservedRunningTime="2026-03-11 10:13:15.885183338 +0000 UTC m=+5646.838506658" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.027487 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.027536 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.111112 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.235574 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc\") pod \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.235787 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2d7p\" (UniqueName: \"kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p\") pod \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.236580 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config\") pod \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\" (UID: \"8e28621f-b235-4e1a-a082-2c7542a2c3ac\") " Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.243653 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p" (OuterVolumeSpecName: "kube-api-access-b2d7p") pod "8e28621f-b235-4e1a-a082-2c7542a2c3ac" (UID: "8e28621f-b235-4e1a-a082-2c7542a2c3ac"). InnerVolumeSpecName "kube-api-access-b2d7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.284680 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config" (OuterVolumeSpecName: "config") pod "8e28621f-b235-4e1a-a082-2c7542a2c3ac" (UID: "8e28621f-b235-4e1a-a082-2c7542a2c3ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.285376 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e28621f-b235-4e1a-a082-2c7542a2c3ac" (UID: "8e28621f-b235-4e1a-a082-2c7542a2c3ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.339260 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.339297 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2d7p\" (UniqueName: \"kubernetes.io/projected/8e28621f-b235-4e1a-a082-2c7542a2c3ac-kube-api-access-b2d7p\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.339309 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e28621f-b235-4e1a-a082-2c7542a2c3ac-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.878105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" event={"ID":"8e28621f-b235-4e1a-a082-2c7542a2c3ac","Type":"ContainerDied","Data":"1ba4e2910f776022b10c1f5ca821d75cea0581e4dcfce5f0421ea6ef39d40667"} Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.878531 4808 scope.go:117] "RemoveContainer" containerID="e0c317b2a6b610e601fc1aad6a1e17203ba8ecaf2fef2ad73672db5cf2f8e6a4" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.878184 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzmp7" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.913260 4808 scope.go:117] "RemoveContainer" containerID="5e32985ef7339e383f65d096a935cbb7822fb5916dd1de35bbd6becec5a0c0b6" Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.917657 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:13:16 crc kubenswrapper[4808]: I0311 10:13:16.925567 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzmp7"] Mar 11 10:13:17 crc kubenswrapper[4808]: I0311 10:13:17.801908 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" path="/var/lib/kubelet/pods/8e28621f-b235-4e1a-a082-2c7542a2c3ac/volumes" Mar 11 10:13:17 crc kubenswrapper[4808]: E0311 10:13:17.836448 4808 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:52028->38.102.83.113:39975: write tcp 38.102.83.113:52028->38.102.83.113:39975: write: connection reset by peer Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.713675 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:13:20 crc kubenswrapper[4808]: E0311 10:13:20.720824 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerName="dnsmasq-dns" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.720843 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerName="dnsmasq-dns" Mar 11 10:13:20 crc kubenswrapper[4808]: E0311 10:13:20.720854 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerName="init" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.720861 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" 
containerName="init" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.721000 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e28621f-b235-4e1a-a082-2c7542a2c3ac" containerName="dnsmasq-dns" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.721905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.724460 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.725327 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-znbhn" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.725710 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.725897 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.739695 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.858751 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.858797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-config\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc 
kubenswrapper[4808]: I0311 10:13:20.858935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.858985 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-scripts\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.859087 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.859155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.859186 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lx8\" (UniqueName: \"kubernetes.io/projected/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-kube-api-access-n6lx8\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960276 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960343 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lx8\" (UniqueName: \"kubernetes.io/projected/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-kube-api-access-n6lx8\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-config\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " 
pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.960541 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-scripts\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.961700 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-config\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.961755 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-scripts\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.961955 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.966844 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.968086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.968836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:20 crc kubenswrapper[4808]: I0311 10:13:20.977960 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lx8\" (UniqueName: \"kubernetes.io/projected/c5eb2773-f6fc-4322-8bb9-ded8d96de7bb-kube-api-access-n6lx8\") pod \"ovn-northd-0\" (UID: \"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb\") " pod="openstack/ovn-northd-0" Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.049800 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.483703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:13:21 crc kubenswrapper[4808]: W0311 10:13:21.487523 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eb2773_f6fc_4322_8bb9_ded8d96de7bb.slice/crio-0117f6b946ef38f408c393dcae8ea6d79fc79f839c131990f544497a32347b52 WatchSource:0}: Error finding container 0117f6b946ef38f408c393dcae8ea6d79fc79f839c131990f544497a32347b52: Status 404 returned error can't find the container with id 0117f6b946ef38f408c393dcae8ea6d79fc79f839c131990f544497a32347b52 Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.921138 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb","Type":"ContainerStarted","Data":"33ab24174cdc215507de487b2a61b0cb2d0651637d4e1df5a6819516a2fc0555"} Mar 11 10:13:21 crc 
kubenswrapper[4808]: I0311 10:13:21.921835 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb","Type":"ContainerStarted","Data":"b04daafae58b7d59419f1b29da6204d3e5c720d35b6f888960e7eb8cf7273718"} Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.921857 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c5eb2773-f6fc-4322-8bb9-ded8d96de7bb","Type":"ContainerStarted","Data":"0117f6b946ef38f408c393dcae8ea6d79fc79f839c131990f544497a32347b52"} Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.921882 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 11 10:13:21 crc kubenswrapper[4808]: I0311 10:13:21.941658 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.941625186 podStartE2EDuration="1.941625186s" podCreationTimestamp="2026-03-11 10:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:21.936849741 +0000 UTC m=+5652.890173101" watchObservedRunningTime="2026-03-11 10:13:21.941625186 +0000 UTC m=+5652.894948546" Mar 11 10:13:25 crc kubenswrapper[4808]: I0311 10:13:25.948701 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wf94v"] Mar 11 10:13:25 crc kubenswrapper[4808]: I0311 10:13:25.950610 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:25 crc kubenswrapper[4808]: I0311 10:13:25.962005 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wf94v"] Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.042554 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4f84-account-create-update-hkf9p"] Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.045213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.045269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hc6g\" (UniqueName: \"kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.045478 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.048432 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.066573 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f84-account-create-update-hkf9p"] Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.147028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6m7h\" (UniqueName: \"kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h\") pod \"keystone-4f84-account-create-update-hkf9p\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.147119 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.147144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hc6g\" (UniqueName: \"kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.147197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts\") pod \"keystone-4f84-account-create-update-hkf9p\" (UID: 
\"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.147966 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.166148 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hc6g\" (UniqueName: \"kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g\") pod \"keystone-db-create-wf94v\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.248571 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts\") pod \"keystone-4f84-account-create-update-hkf9p\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.249023 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6m7h\" (UniqueName: \"kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h\") pod \"keystone-4f84-account-create-update-hkf9p\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.251580 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts\") pod 
\"keystone-4f84-account-create-update-hkf9p\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.270838 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.272488 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6m7h\" (UniqueName: \"kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h\") pod \"keystone-4f84-account-create-update-hkf9p\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.369344 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.728958 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wf94v"] Mar 11 10:13:26 crc kubenswrapper[4808]: W0311 10:13:26.739682 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod251ae4bf_4e3d_4665_b893_aa0409a4b0cc.slice/crio-9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad WatchSource:0}: Error finding container 9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad: Status 404 returned error can't find the container with id 9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad Mar 11 10:13:26 crc kubenswrapper[4808]: W0311 10:13:26.949726 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0401232b_b657_4e67_b2b4_4bdd71c3f409.slice/crio-b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323 WatchSource:0}: Error finding 
container b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323: Status 404 returned error can't find the container with id b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323 Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.949888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f84-account-create-update-hkf9p"] Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.969568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wf94v" event={"ID":"251ae4bf-4e3d-4665-b893-aa0409a4b0cc","Type":"ContainerStarted","Data":"d9b94245f6291c94f554660eaf0535397113461d74f015e17b5f8c0b3c32507d"} Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.969619 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wf94v" event={"ID":"251ae4bf-4e3d-4665-b893-aa0409a4b0cc","Type":"ContainerStarted","Data":"9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad"} Mar 11 10:13:26 crc kubenswrapper[4808]: I0311 10:13:26.970833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f84-account-create-update-hkf9p" event={"ID":"0401232b-b657-4e67-b2b4-4bdd71c3f409","Type":"ContainerStarted","Data":"b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323"} Mar 11 10:13:27 crc kubenswrapper[4808]: I0311 10:13:27.004433 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-wf94v" podStartSLOduration=2.004400149 podStartE2EDuration="2.004400149s" podCreationTimestamp="2026-03-11 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:26.99526635 +0000 UTC m=+5657.948589670" watchObservedRunningTime="2026-03-11 10:13:27.004400149 +0000 UTC m=+5657.957723489" Mar 11 10:13:27 crc kubenswrapper[4808]: I0311 10:13:27.979467 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="251ae4bf-4e3d-4665-b893-aa0409a4b0cc" containerID="d9b94245f6291c94f554660eaf0535397113461d74f015e17b5f8c0b3c32507d" exitCode=0 Mar 11 10:13:27 crc kubenswrapper[4808]: I0311 10:13:27.979540 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wf94v" event={"ID":"251ae4bf-4e3d-4665-b893-aa0409a4b0cc","Type":"ContainerDied","Data":"d9b94245f6291c94f554660eaf0535397113461d74f015e17b5f8c0b3c32507d"} Mar 11 10:13:27 crc kubenswrapper[4808]: I0311 10:13:27.981642 4808 generic.go:334] "Generic (PLEG): container finished" podID="0401232b-b657-4e67-b2b4-4bdd71c3f409" containerID="798430011ac65f269e138f564804834356fb80254c4826f6a805f0ae515974d4" exitCode=0 Mar 11 10:13:27 crc kubenswrapper[4808]: I0311 10:13:27.981684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f84-account-create-update-hkf9p" event={"ID":"0401232b-b657-4e67-b2b4-4bdd71c3f409","Type":"ContainerDied","Data":"798430011ac65f269e138f564804834356fb80254c4826f6a805f0ae515974d4"} Mar 11 10:13:29 crc kubenswrapper[4808]: I0311 10:13:29.998219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f84-account-create-update-hkf9p" event={"ID":"0401232b-b657-4e67-b2b4-4bdd71c3f409","Type":"ContainerDied","Data":"b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323"} Mar 11 10:13:29 crc kubenswrapper[4808]: I0311 10:13:29.998293 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b649c59f8bf0e7fe3f239ec10d0f2cdcac83f5ccb135ade0c3d3895085b21323" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.103885 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.113255 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.234081 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hc6g\" (UniqueName: \"kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g\") pod \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.234219 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6m7h\" (UniqueName: \"kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h\") pod \"0401232b-b657-4e67-b2b4-4bdd71c3f409\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.234257 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts\") pod \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\" (UID: \"251ae4bf-4e3d-4665-b893-aa0409a4b0cc\") " Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.234353 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts\") pod \"0401232b-b657-4e67-b2b4-4bdd71c3f409\" (UID: \"0401232b-b657-4e67-b2b4-4bdd71c3f409\") " Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.235126 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0401232b-b657-4e67-b2b4-4bdd71c3f409" (UID: "0401232b-b657-4e67-b2b4-4bdd71c3f409"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.235152 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "251ae4bf-4e3d-4665-b893-aa0409a4b0cc" (UID: "251ae4bf-4e3d-4665-b893-aa0409a4b0cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.239093 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h" (OuterVolumeSpecName: "kube-api-access-b6m7h") pod "0401232b-b657-4e67-b2b4-4bdd71c3f409" (UID: "0401232b-b657-4e67-b2b4-4bdd71c3f409"). InnerVolumeSpecName "kube-api-access-b6m7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.240052 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g" (OuterVolumeSpecName: "kube-api-access-8hc6g") pod "251ae4bf-4e3d-4665-b893-aa0409a4b0cc" (UID: "251ae4bf-4e3d-4665-b893-aa0409a4b0cc"). InnerVolumeSpecName "kube-api-access-8hc6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.336845 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0401232b-b657-4e67-b2b4-4bdd71c3f409-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.336893 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hc6g\" (UniqueName: \"kubernetes.io/projected/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-kube-api-access-8hc6g\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.336907 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6m7h\" (UniqueName: \"kubernetes.io/projected/0401232b-b657-4e67-b2b4-4bdd71c3f409-kube-api-access-b6m7h\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:30 crc kubenswrapper[4808]: I0311 10:13:30.336919 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/251ae4bf-4e3d-4665-b893-aa0409a4b0cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:31 crc kubenswrapper[4808]: I0311 10:13:31.007493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wf94v" event={"ID":"251ae4bf-4e3d-4665-b893-aa0409a4b0cc","Type":"ContainerDied","Data":"9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad"} Mar 11 10:13:31 crc kubenswrapper[4808]: I0311 10:13:31.007798 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db6a1dde8edc0b5067cbb5da480357f9d7bfcf3bed2b1c2de1f91655dcfc7ad" Mar 11 10:13:31 crc kubenswrapper[4808]: I0311 10:13:31.007539 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wf94v" Mar 11 10:13:31 crc kubenswrapper[4808]: I0311 10:13:31.007527 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4f84-account-create-update-hkf9p" Mar 11 10:13:31 crc kubenswrapper[4808]: I0311 10:13:31.706459 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.078216 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r866s"] Mar 11 10:13:37 crc kubenswrapper[4808]: E0311 10:13:37.079091 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401232b-b657-4e67-b2b4-4bdd71c3f409" containerName="mariadb-account-create-update" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.079108 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401232b-b657-4e67-b2b4-4bdd71c3f409" containerName="mariadb-account-create-update" Mar 11 10:13:37 crc kubenswrapper[4808]: E0311 10:13:37.079147 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251ae4bf-4e3d-4665-b893-aa0409a4b0cc" containerName="mariadb-database-create" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.079155 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="251ae4bf-4e3d-4665-b893-aa0409a4b0cc" containerName="mariadb-database-create" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.079302 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="251ae4bf-4e3d-4665-b893-aa0409a4b0cc" containerName="mariadb-database-create" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.079313 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0401232b-b657-4e67-b2b4-4bdd71c3f409" containerName="mariadb-account-create-update" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.079892 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.082917 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.083195 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.086066 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnptc" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.093926 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.103137 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r866s"] Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.151538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.151602 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.151830 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgl2\" (UniqueName: \"kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2\") pod \"keystone-db-sync-r866s\" (UID: 
\"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.253999 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.254091 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgl2\" (UniqueName: \"kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.254218 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.260573 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.260659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: 
I0311 10:13:37.274017 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgl2\" (UniqueName: \"kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2\") pod \"keystone-db-sync-r866s\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.412803 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:37 crc kubenswrapper[4808]: I0311 10:13:37.867699 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r866s"] Mar 11 10:13:38 crc kubenswrapper[4808]: I0311 10:13:38.064477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r866s" event={"ID":"4b4593c5-768f-4fc4-a240-58c73fc92664","Type":"ContainerStarted","Data":"a5c43d420c49c3d5b62c46e7a5f653f5fc596022bb23d113f30d388fb762d36f"} Mar 11 10:13:39 crc kubenswrapper[4808]: I0311 10:13:39.073399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r866s" event={"ID":"4b4593c5-768f-4fc4-a240-58c73fc92664","Type":"ContainerStarted","Data":"ae3a065f46d337a45bd55bf7959adffc239b664cf6bbcd7497b74b571196fa1d"} Mar 11 10:13:39 crc kubenswrapper[4808]: I0311 10:13:39.090125 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r866s" podStartSLOduration=2.090035895 podStartE2EDuration="2.090035895s" podCreationTimestamp="2026-03-11 10:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:39.089817729 +0000 UTC m=+5670.043141049" watchObservedRunningTime="2026-03-11 10:13:39.090035895 +0000 UTC m=+5670.043359215" Mar 11 10:13:40 crc kubenswrapper[4808]: I0311 10:13:40.082679 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="4b4593c5-768f-4fc4-a240-58c73fc92664" containerID="ae3a065f46d337a45bd55bf7959adffc239b664cf6bbcd7497b74b571196fa1d" exitCode=0 Mar 11 10:13:40 crc kubenswrapper[4808]: I0311 10:13:40.082726 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r866s" event={"ID":"4b4593c5-768f-4fc4-a240-58c73fc92664","Type":"ContainerDied","Data":"ae3a065f46d337a45bd55bf7959adffc239b664cf6bbcd7497b74b571196fa1d"} Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.430911 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.542106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data\") pod \"4b4593c5-768f-4fc4-a240-58c73fc92664\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.542605 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xgl2\" (UniqueName: \"kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2\") pod \"4b4593c5-768f-4fc4-a240-58c73fc92664\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.542731 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle\") pod \"4b4593c5-768f-4fc4-a240-58c73fc92664\" (UID: \"4b4593c5-768f-4fc4-a240-58c73fc92664\") " Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.548223 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2" (OuterVolumeSpecName: "kube-api-access-7xgl2") pod 
"4b4593c5-768f-4fc4-a240-58c73fc92664" (UID: "4b4593c5-768f-4fc4-a240-58c73fc92664"). InnerVolumeSpecName "kube-api-access-7xgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.571594 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b4593c5-768f-4fc4-a240-58c73fc92664" (UID: "4b4593c5-768f-4fc4-a240-58c73fc92664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.582170 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data" (OuterVolumeSpecName: "config-data") pod "4b4593c5-768f-4fc4-a240-58c73fc92664" (UID: "4b4593c5-768f-4fc4-a240-58c73fc92664"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.645018 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xgl2\" (UniqueName: \"kubernetes.io/projected/4b4593c5-768f-4fc4-a240-58c73fc92664-kube-api-access-7xgl2\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.645061 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:41 crc kubenswrapper[4808]: I0311 10:13:41.645074 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4593c5-768f-4fc4-a240-58c73fc92664-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.127649 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r866s" event={"ID":"4b4593c5-768f-4fc4-a240-58c73fc92664","Type":"ContainerDied","Data":"a5c43d420c49c3d5b62c46e7a5f653f5fc596022bb23d113f30d388fb762d36f"} Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.127707 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c43d420c49c3d5b62c46e7a5f653f5fc596022bb23d113f30d388fb762d36f" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.127711 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r866s" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.324659 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b77f9d55-nglmp"] Mar 11 10:13:42 crc kubenswrapper[4808]: E0311 10:13:42.325145 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4593c5-768f-4fc4-a240-58c73fc92664" containerName="keystone-db-sync" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.325190 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4593c5-768f-4fc4-a240-58c73fc92664" containerName="keystone-db-sync" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.325469 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b4593c5-768f-4fc4-a240-58c73fc92664" containerName="keystone-db-sync" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.328334 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.332974 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b77f9d55-nglmp"] Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.382678 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f7tnm"] Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.383749 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.386772 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.387102 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.387273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.387545 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.389282 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnptc" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.402652 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7tnm"] Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.459918 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-sb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.459977 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52grw\" (UniqueName: \"kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460054 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-config\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6wk\" (UniqueName: \"kubernetes.io/projected/47681338-c25c-475f-a5a9-faa1c1f65049-kube-api-access-6l6wk\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-dns-svc\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460258 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460393 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460439 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.460466 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-nb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-dns-svc\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561377 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-nb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561527 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-sb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52grw\" (UniqueName: \"kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561574 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-config\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.561594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6wk\" (UniqueName: \"kubernetes.io/projected/47681338-c25c-475f-a5a9-faa1c1f65049-kube-api-access-6l6wk\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.562202 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-dns-svc\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.562878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-sb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.562905 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-config\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.565609 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47681338-c25c-475f-a5a9-faa1c1f65049-ovsdbserver-nb\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.566580 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.567088 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: 
\"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.567569 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.581325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.583874 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6wk\" (UniqueName: \"kubernetes.io/projected/47681338-c25c-475f-a5a9-faa1c1f65049-kube-api-access-6l6wk\") pod \"dnsmasq-dns-76b77f9d55-nglmp\" (UID: \"47681338-c25c-475f-a5a9-faa1c1f65049\") " pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.585255 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52grw\" (UniqueName: \"kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.585617 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data\") pod \"keystone-bootstrap-f7tnm\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:42 crc 
kubenswrapper[4808]: I0311 10:13:42.659191 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:42 crc kubenswrapper[4808]: I0311 10:13:42.704841 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:43 crc kubenswrapper[4808]: I0311 10:13:43.149058 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b77f9d55-nglmp"] Mar 11 10:13:43 crc kubenswrapper[4808]: W0311 10:13:43.153866 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47681338_c25c_475f_a5a9_faa1c1f65049.slice/crio-4e5b68208813e3a7fb426f5f216e7d6610d5edd5012fd950b83eef0282b2b1c4 WatchSource:0}: Error finding container 4e5b68208813e3a7fb426f5f216e7d6610d5edd5012fd950b83eef0282b2b1c4: Status 404 returned error can't find the container with id 4e5b68208813e3a7fb426f5f216e7d6610d5edd5012fd950b83eef0282b2b1c4 Mar 11 10:13:43 crc kubenswrapper[4808]: I0311 10:13:43.231897 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7tnm"] Mar 11 10:13:43 crc kubenswrapper[4808]: W0311 10:13:43.245240 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a46ea2f_29bc_4cf4_86e9_bde65b50357c.slice/crio-d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07 WatchSource:0}: Error finding container d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07: Status 404 returned error can't find the container with id d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07 Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.142920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7tnm" 
event={"ID":"1a46ea2f-29bc-4cf4-86e9-bde65b50357c","Type":"ContainerStarted","Data":"afaa5cb37ca00d3e01df722d961c7c3895a32b15b8cd7b4d7c4eb6cbe6e5cf30"} Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.143269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7tnm" event={"ID":"1a46ea2f-29bc-4cf4-86e9-bde65b50357c","Type":"ContainerStarted","Data":"d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07"} Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.146500 4808 generic.go:334] "Generic (PLEG): container finished" podID="47681338-c25c-475f-a5a9-faa1c1f65049" containerID="24be50ca6e0083b20eab9f39a7f08ef62ace1a9ba9fc99b834a56404adc71228" exitCode=0 Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.146538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" event={"ID":"47681338-c25c-475f-a5a9-faa1c1f65049","Type":"ContainerDied","Data":"24be50ca6e0083b20eab9f39a7f08ef62ace1a9ba9fc99b834a56404adc71228"} Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.146559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" event={"ID":"47681338-c25c-475f-a5a9-faa1c1f65049","Type":"ContainerStarted","Data":"4e5b68208813e3a7fb426f5f216e7d6610d5edd5012fd950b83eef0282b2b1c4"} Mar 11 10:13:44 crc kubenswrapper[4808]: I0311 10:13:44.207195 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f7tnm" podStartSLOduration=2.207173117 podStartE2EDuration="2.207173117s" podCreationTimestamp="2026-03-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:44.177314741 +0000 UTC m=+5675.130638081" watchObservedRunningTime="2026-03-11 10:13:44.207173117 +0000 UTC m=+5675.160496437" Mar 11 10:13:45 crc kubenswrapper[4808]: I0311 10:13:45.155784 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" event={"ID":"47681338-c25c-475f-a5a9-faa1c1f65049","Type":"ContainerStarted","Data":"6f5163441eb4f48c4ec15af8f58cea943859e4af90d6378d1c9cb695d8846058"} Mar 11 10:13:45 crc kubenswrapper[4808]: I0311 10:13:45.180093 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" podStartSLOduration=3.180074114 podStartE2EDuration="3.180074114s" podCreationTimestamp="2026-03-11 10:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:45.171347727 +0000 UTC m=+5676.124671067" watchObservedRunningTime="2026-03-11 10:13:45.180074114 +0000 UTC m=+5676.133397444" Mar 11 10:13:46 crc kubenswrapper[4808]: I0311 10:13:46.027951 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:13:46 crc kubenswrapper[4808]: I0311 10:13:46.028020 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:13:46 crc kubenswrapper[4808]: I0311 10:13:46.161338 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:47 crc kubenswrapper[4808]: I0311 10:13:47.171281 4808 generic.go:334] "Generic (PLEG): container finished" podID="1a46ea2f-29bc-4cf4-86e9-bde65b50357c" containerID="afaa5cb37ca00d3e01df722d961c7c3895a32b15b8cd7b4d7c4eb6cbe6e5cf30" exitCode=0 Mar 11 10:13:47 crc 
kubenswrapper[4808]: I0311 10:13:47.171392 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7tnm" event={"ID":"1a46ea2f-29bc-4cf4-86e9-bde65b50357c","Type":"ContainerDied","Data":"afaa5cb37ca00d3e01df722d961c7c3895a32b15b8cd7b4d7c4eb6cbe6e5cf30"} Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.517114 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.578762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.578827 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.578875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.578978 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52grw\" (UniqueName: \"kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.579002 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.579055 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys\") pod \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\" (UID: \"1a46ea2f-29bc-4cf4-86e9-bde65b50357c\") " Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.584220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw" (OuterVolumeSpecName: "kube-api-access-52grw") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "kube-api-access-52grw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.584531 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts" (OuterVolumeSpecName: "scripts") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.584840 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.585451 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.605558 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data" (OuterVolumeSpecName: "config-data") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.607275 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a46ea2f-29bc-4cf4-86e9-bde65b50357c" (UID: "1a46ea2f-29bc-4cf4-86e9-bde65b50357c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681076 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681111 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681121 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681128 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52grw\" (UniqueName: \"kubernetes.io/projected/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-kube-api-access-52grw\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681136 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:48 crc kubenswrapper[4808]: I0311 10:13:48.681144 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a46ea2f-29bc-4cf4-86e9-bde65b50357c-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.186976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7tnm" event={"ID":"1a46ea2f-29bc-4cf4-86e9-bde65b50357c","Type":"ContainerDied","Data":"d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07"} Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 
10:13:49.187019 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0cf0408b11a8d2877703fc69fe2f76e8df1b9b4ad707722b0da4a4112941c07" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.187062 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7tnm" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.260261 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f7tnm"] Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.266720 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f7tnm"] Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.359336 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hmcwh"] Mar 11 10:13:49 crc kubenswrapper[4808]: E0311 10:13:49.359768 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a46ea2f-29bc-4cf4-86e9-bde65b50357c" containerName="keystone-bootstrap" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.359821 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a46ea2f-29bc-4cf4-86e9-bde65b50357c" containerName="keystone-bootstrap" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.360159 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a46ea2f-29bc-4cf4-86e9-bde65b50357c" containerName="keystone-bootstrap" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.360861 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.363604 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.363880 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.363997 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnptc" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.364107 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.365992 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.369932 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hmcwh"] Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.493679 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.493950 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.493970 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.493997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.494030 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.494051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5sz\" (UniqueName: \"kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.595303 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.596121 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.596280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.596476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.596630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.596743 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5sz\" (UniqueName: \"kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.599556 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts\") pod \"keystone-bootstrap-hmcwh\" 
(UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.599996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.601533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.602026 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.603972 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.621426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5sz\" (UniqueName: \"kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz\") pod \"keystone-bootstrap-hmcwh\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: 
I0311 10:13:49.683383 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:49 crc kubenswrapper[4808]: I0311 10:13:49.827262 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a46ea2f-29bc-4cf4-86e9-bde65b50357c" path="/var/lib/kubelet/pods/1a46ea2f-29bc-4cf4-86e9-bde65b50357c/volumes" Mar 11 10:13:50 crc kubenswrapper[4808]: I0311 10:13:50.155495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hmcwh"] Mar 11 10:13:50 crc kubenswrapper[4808]: I0311 10:13:50.198702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmcwh" event={"ID":"5774d085-51ad-43a1-9404-6c288deec196","Type":"ContainerStarted","Data":"ce0d79a553d5beed55d9e30ea04e0326c1288c3336866cc50a19509a17b097e0"} Mar 11 10:13:51 crc kubenswrapper[4808]: I0311 10:13:51.207787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmcwh" event={"ID":"5774d085-51ad-43a1-9404-6c288deec196","Type":"ContainerStarted","Data":"812848b18e75e588cd448e12ae3f766b6be7e494c1942939a03312177f3d64d6"} Mar 11 10:13:52 crc kubenswrapper[4808]: I0311 10:13:52.660721 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b77f9d55-nglmp" Mar 11 10:13:52 crc kubenswrapper[4808]: I0311 10:13:52.678381 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hmcwh" podStartSLOduration=3.678330602 podStartE2EDuration="3.678330602s" podCreationTimestamp="2026-03-11 10:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:13:51.229675149 +0000 UTC m=+5682.182998459" watchObservedRunningTime="2026-03-11 10:13:52.678330602 +0000 UTC m=+5683.631653922" Mar 11 10:13:52 crc kubenswrapper[4808]: I0311 10:13:52.720701 4808 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:52 crc kubenswrapper[4808]: I0311 10:13:52.720938 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="dnsmasq-dns" containerID="cri-o://93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff" gracePeriod=10 Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.224443 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.225675 4808 generic.go:334] "Generic (PLEG): container finished" podID="5774d085-51ad-43a1-9404-6c288deec196" containerID="812848b18e75e588cd448e12ae3f766b6be7e494c1942939a03312177f3d64d6" exitCode=0 Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.225739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmcwh" event={"ID":"5774d085-51ad-43a1-9404-6c288deec196","Type":"ContainerDied","Data":"812848b18e75e588cd448e12ae3f766b6be7e494c1942939a03312177f3d64d6"} Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.228935 4808 generic.go:334] "Generic (PLEG): container finished" podID="a375a012-b69e-494d-8648-4be3c035b2f1" containerID="93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff" exitCode=0 Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.228963 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" event={"ID":"a375a012-b69e-494d-8648-4be3c035b2f1","Type":"ContainerDied","Data":"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff"} Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.228982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" 
event={"ID":"a375a012-b69e-494d-8648-4be3c035b2f1","Type":"ContainerDied","Data":"da062735a018b34ba6c3535eb1e44282ee2a6c1029404d969934fa0b57e98297"} Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.228999 4808 scope.go:117] "RemoveContainer" containerID="93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.229114 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc897dccf-z6rp4" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.262909 4808 scope.go:117] "RemoveContainer" containerID="bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.301965 4808 scope.go:117] "RemoveContainer" containerID="93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff" Mar 11 10:13:53 crc kubenswrapper[4808]: E0311 10:13:53.302498 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff\": container with ID starting with 93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff not found: ID does not exist" containerID="93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.302654 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff"} err="failed to get container status \"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff\": rpc error: code = NotFound desc = could not find container \"93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff\": container with ID starting with 93a52ac634d0f3b5759624bcc9d43a6b3e3cbf1d5b52eeb2e2971518c75b38ff not found: ID does not exist" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.302749 
4808 scope.go:117] "RemoveContainer" containerID="bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39" Mar 11 10:13:53 crc kubenswrapper[4808]: E0311 10:13:53.303134 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39\": container with ID starting with bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39 not found: ID does not exist" containerID="bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.303223 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39"} err="failed to get container status \"bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39\": rpc error: code = NotFound desc = could not find container \"bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39\": container with ID starting with bb56672baf296e4cbf8cfaf915b88d6c4b8a83b51bd0fefd3214890541a8aa39 not found: ID does not exist" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.372001 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config\") pod \"a375a012-b69e-494d-8648-4be3c035b2f1\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.372238 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb\") pod \"a375a012-b69e-494d-8648-4be3c035b2f1\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.372461 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb\") pod \"a375a012-b69e-494d-8648-4be3c035b2f1\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.372566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h64t\" (UniqueName: \"kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t\") pod \"a375a012-b69e-494d-8648-4be3c035b2f1\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.372693 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc\") pod \"a375a012-b69e-494d-8648-4be3c035b2f1\" (UID: \"a375a012-b69e-494d-8648-4be3c035b2f1\") " Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.381592 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t" (OuterVolumeSpecName: "kube-api-access-5h64t") pod "a375a012-b69e-494d-8648-4be3c035b2f1" (UID: "a375a012-b69e-494d-8648-4be3c035b2f1"). InnerVolumeSpecName "kube-api-access-5h64t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.411136 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config" (OuterVolumeSpecName: "config") pod "a375a012-b69e-494d-8648-4be3c035b2f1" (UID: "a375a012-b69e-494d-8648-4be3c035b2f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.417270 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a375a012-b69e-494d-8648-4be3c035b2f1" (UID: "a375a012-b69e-494d-8648-4be3c035b2f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.420869 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a375a012-b69e-494d-8648-4be3c035b2f1" (UID: "a375a012-b69e-494d-8648-4be3c035b2f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.430248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a375a012-b69e-494d-8648-4be3c035b2f1" (UID: "a375a012-b69e-494d-8648-4be3c035b2f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.474747 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.475112 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.475126 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.475142 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h64t\" (UniqueName: \"kubernetes.io/projected/a375a012-b69e-494d-8648-4be3c035b2f1-kube-api-access-5h64t\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.475155 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a375a012-b69e-494d-8648-4be3c035b2f1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.571645 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.578267 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cc897dccf-z6rp4"] Mar 11 10:13:53 crc kubenswrapper[4808]: I0311 10:13:53.801349 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" path="/var/lib/kubelet/pods/a375a012-b69e-494d-8648-4be3c035b2f1/volumes" Mar 11 10:13:54 crc kubenswrapper[4808]: 
I0311 10:13:54.515308 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597563 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5sz\" (UniqueName: \"kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597644 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597779 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597851 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.597881 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys\") pod \"5774d085-51ad-43a1-9404-6c288deec196\" (UID: \"5774d085-51ad-43a1-9404-6c288deec196\") " Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.601896 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.602551 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts" (OuterVolumeSpecName: "scripts") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.602682 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.603732 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz" (OuterVolumeSpecName: "kube-api-access-st5sz") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "kube-api-access-st5sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.619265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data" (OuterVolumeSpecName: "config-data") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.622878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5774d085-51ad-43a1-9404-6c288deec196" (UID: "5774d085-51ad-43a1-9404-6c288deec196"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700370 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5sz\" (UniqueName: \"kubernetes.io/projected/5774d085-51ad-43a1-9404-6c288deec196-kube-api-access-st5sz\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700407 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700419 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700431 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-config-data\") on node \"crc\" DevicePath \"\"" Mar 
11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700442 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:54 crc kubenswrapper[4808]: I0311 10:13:54.700452 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5774d085-51ad-43a1-9404-6c288deec196-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.249920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hmcwh" event={"ID":"5774d085-51ad-43a1-9404-6c288deec196","Type":"ContainerDied","Data":"ce0d79a553d5beed55d9e30ea04e0326c1288c3336866cc50a19509a17b097e0"} Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.249975 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce0d79a553d5beed55d9e30ea04e0326c1288c3336866cc50a19509a17b097e0" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.250014 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hmcwh" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.350575 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fd9864c75-s46ps"] Mar 11 10:13:55 crc kubenswrapper[4808]: E0311 10:13:55.350963 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="dnsmasq-dns" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.350983 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="dnsmasq-dns" Mar 11 10:13:55 crc kubenswrapper[4808]: E0311 10:13:55.351006 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5774d085-51ad-43a1-9404-6c288deec196" containerName="keystone-bootstrap" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.351013 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5774d085-51ad-43a1-9404-6c288deec196" containerName="keystone-bootstrap" Mar 11 10:13:55 crc kubenswrapper[4808]: E0311 10:13:55.351026 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="init" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.351033 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="init" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.351199 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a375a012-b69e-494d-8648-4be3c035b2f1" containerName="dnsmasq-dns" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.351216 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5774d085-51ad-43a1-9404-6c288deec196" containerName="keystone-bootstrap" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.351889 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354341 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354344 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354636 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnptc" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354707 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354821 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.354948 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.359376 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd9864c75-s46ps"] Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.411843 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-internal-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.411959 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-credential-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " 
pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412014 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-combined-ca-bundle\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-fernet-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412213 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-scripts\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412251 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ctnr\" (UniqueName: \"kubernetes.io/projected/8a0372f6-df1c-4590-9d76-eab6ef966ab2-kube-api-access-6ctnr\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412284 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-config-data\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " 
pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.412311 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-public-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.513470 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-fernet-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.513596 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-scripts\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.513627 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ctnr\" (UniqueName: \"kubernetes.io/projected/8a0372f6-df1c-4590-9d76-eab6ef966ab2-kube-api-access-6ctnr\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.514517 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-config-data\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 
10:13:55.514545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-public-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.514590 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-internal-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.514641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-credential-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.514668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-combined-ca-bundle\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.519550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-credential-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.519795 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-config-data\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.519842 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-scripts\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.519894 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-fernet-keys\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.520280 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-internal-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.520548 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-combined-ca-bundle\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.527855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0372f6-df1c-4590-9d76-eab6ef966ab2-public-tls-certs\") pod \"keystone-fd9864c75-s46ps\" (UID: 
\"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.538344 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ctnr\" (UniqueName: \"kubernetes.io/projected/8a0372f6-df1c-4590-9d76-eab6ef966ab2-kube-api-access-6ctnr\") pod \"keystone-fd9864c75-s46ps\" (UID: \"8a0372f6-df1c-4590-9d76-eab6ef966ab2\") " pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:55 crc kubenswrapper[4808]: I0311 10:13:55.687159 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:56 crc kubenswrapper[4808]: I0311 10:13:56.144714 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd9864c75-s46ps"] Mar 11 10:13:56 crc kubenswrapper[4808]: I0311 10:13:56.258520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd9864c75-s46ps" event={"ID":"8a0372f6-df1c-4590-9d76-eab6ef966ab2","Type":"ContainerStarted","Data":"1b07fe575658771bc212f7f8262e92f36059b26941eb7dd64d52695a8acf50a7"} Mar 11 10:13:57 crc kubenswrapper[4808]: I0311 10:13:57.265580 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd9864c75-s46ps" event={"ID":"8a0372f6-df1c-4590-9d76-eab6ef966ab2","Type":"ContainerStarted","Data":"8420a58a7445d15030c194d2507a48c7840d7e752e7d34df476addcea883df8e"} Mar 11 10:13:57 crc kubenswrapper[4808]: I0311 10:13:57.265929 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-fd9864c75-s46ps" Mar 11 10:13:57 crc kubenswrapper[4808]: I0311 10:13:57.287616 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-fd9864c75-s46ps" podStartSLOduration=2.287592028 podStartE2EDuration="2.287592028s" podCreationTimestamp="2026-03-11 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-11 10:13:57.283301197 +0000 UTC m=+5688.236624527" watchObservedRunningTime="2026-03-11 10:13:57.287592028 +0000 UTC m=+5688.240915358" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.143319 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553734-xzrt7"] Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.144879 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.147704 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.147915 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.148123 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.154864 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-xzrt7"] Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.196464 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5smw\" (UniqueName: \"kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw\") pod \"auto-csr-approver-29553734-xzrt7\" (UID: \"97f984b1-7897-4e8d-bdd9-38352f1a74db\") " pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.297880 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5smw\" (UniqueName: \"kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw\") pod \"auto-csr-approver-29553734-xzrt7\" (UID: 
\"97f984b1-7897-4e8d-bdd9-38352f1a74db\") " pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.319281 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5smw\" (UniqueName: \"kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw\") pod \"auto-csr-approver-29553734-xzrt7\" (UID: \"97f984b1-7897-4e8d-bdd9-38352f1a74db\") " pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.465811 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:00 crc kubenswrapper[4808]: I0311 10:14:00.879356 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-xzrt7"] Mar 11 10:14:01 crc kubenswrapper[4808]: I0311 10:14:01.313516 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" event={"ID":"97f984b1-7897-4e8d-bdd9-38352f1a74db","Type":"ContainerStarted","Data":"9f508b3285d5a7ed9e03959b3acc2cb3a21ceda315e8b768992b726104cb8b75"} Mar 11 10:14:02 crc kubenswrapper[4808]: I0311 10:14:02.322710 4808 generic.go:334] "Generic (PLEG): container finished" podID="97f984b1-7897-4e8d-bdd9-38352f1a74db" containerID="3de8830abb72784867cc299f4290b14566906cf1a8227a2524c51fa8cd1d3949" exitCode=0 Mar 11 10:14:02 crc kubenswrapper[4808]: I0311 10:14:02.322780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" event={"ID":"97f984b1-7897-4e8d-bdd9-38352f1a74db","Type":"ContainerDied","Data":"3de8830abb72784867cc299f4290b14566906cf1a8227a2524c51fa8cd1d3949"} Mar 11 10:14:03 crc kubenswrapper[4808]: I0311 10:14:03.683921 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:03 crc kubenswrapper[4808]: I0311 10:14:03.756693 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5smw\" (UniqueName: \"kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw\") pod \"97f984b1-7897-4e8d-bdd9-38352f1a74db\" (UID: \"97f984b1-7897-4e8d-bdd9-38352f1a74db\") " Mar 11 10:14:03 crc kubenswrapper[4808]: I0311 10:14:03.761791 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw" (OuterVolumeSpecName: "kube-api-access-r5smw") pod "97f984b1-7897-4e8d-bdd9-38352f1a74db" (UID: "97f984b1-7897-4e8d-bdd9-38352f1a74db"). InnerVolumeSpecName "kube-api-access-r5smw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:14:03 crc kubenswrapper[4808]: I0311 10:14:03.859254 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5smw\" (UniqueName: \"kubernetes.io/projected/97f984b1-7897-4e8d-bdd9-38352f1a74db-kube-api-access-r5smw\") on node \"crc\" DevicePath \"\"" Mar 11 10:14:04 crc kubenswrapper[4808]: I0311 10:14:04.340199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" event={"ID":"97f984b1-7897-4e8d-bdd9-38352f1a74db","Type":"ContainerDied","Data":"9f508b3285d5a7ed9e03959b3acc2cb3a21ceda315e8b768992b726104cb8b75"} Mar 11 10:14:04 crc kubenswrapper[4808]: I0311 10:14:04.340584 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f508b3285d5a7ed9e03959b3acc2cb3a21ceda315e8b768992b726104cb8b75" Mar 11 10:14:04 crc kubenswrapper[4808]: I0311 10:14:04.340256 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-xzrt7" Mar 11 10:14:04 crc kubenswrapper[4808]: I0311 10:14:04.744016 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-47kvg"] Mar 11 10:14:04 crc kubenswrapper[4808]: I0311 10:14:04.754781 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-47kvg"] Mar 11 10:14:05 crc kubenswrapper[4808]: I0311 10:14:05.801976 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9b275d-100b-449a-8cff-8ff9ba3fc336" path="/var/lib/kubelet/pods/7b9b275d-100b-449a-8cff-8ff9ba3fc336/volumes" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.027851 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.028352 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.028424 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.029064 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.029119 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" gracePeriod=600 Mar 11 10:14:16 crc kubenswrapper[4808]: E0311 10:14:16.152046 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.436046 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" exitCode=0 Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.436178 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"} Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.436532 4808 scope.go:117] "RemoveContainer" containerID="d68def1bbe9769885a009b4f600ab92eb2738788864513034c7d900f875e6f37" Mar 11 10:14:16 crc kubenswrapper[4808]: I0311 10:14:16.437481 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:14:16 crc kubenswrapper[4808]: E0311 10:14:16.438079 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:14:27 crc kubenswrapper[4808]: I0311 10:14:27.247223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-fd9864c75-s46ps"
Mar 11 10:14:27 crc kubenswrapper[4808]: I0311 10:14:27.790174 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:14:27 crc kubenswrapper[4808]: E0311 10:14:27.790538 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.959527 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 11 10:14:30 crc kubenswrapper[4808]: E0311 10:14:30.960249 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f984b1-7897-4e8d-bdd9-38352f1a74db" containerName="oc"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.960261 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f984b1-7897-4e8d-bdd9-38352f1a74db" containerName="oc"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.960422 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f984b1-7897-4e8d-bdd9-38352f1a74db" containerName="oc"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.960904 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.963205 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.963836 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dg7j8"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.964220 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 11 10:14:30 crc kubenswrapper[4808]: I0311 10:14:30.982439 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.116985 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.117054 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvcx\" (UniqueName: \"kubernetes.io/projected/ff6f14ad-50df-4762-a17f-a0390d23b346-kube-api-access-8dvcx\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.117378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.117518 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.219024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.219075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dvcx\" (UniqueName: \"kubernetes.io/projected/ff6f14ad-50df-4762-a17f-a0390d23b346-kube-api-access-8dvcx\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.219154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.219196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.220301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.227851 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-openstack-config-secret\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.228513 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6f14ad-50df-4762-a17f-a0390d23b346-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.242493 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dvcx\" (UniqueName: \"kubernetes.io/projected/ff6f14ad-50df-4762-a17f-a0390d23b346-kube-api-access-8dvcx\") pod \"openstackclient\" (UID: \"ff6f14ad-50df-4762-a17f-a0390d23b346\") " pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.281446 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 11 10:14:31 crc kubenswrapper[4808]: I0311 10:14:31.762847 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 11 10:14:32 crc kubenswrapper[4808]: I0311 10:14:32.580313 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ff6f14ad-50df-4762-a17f-a0390d23b346","Type":"ContainerStarted","Data":"f5a5b5f737469c2b1a1e6382e253a233c900cb38217ce0b72b8e0d432f1039b1"}
Mar 11 10:14:32 crc kubenswrapper[4808]: I0311 10:14:32.580699 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ff6f14ad-50df-4762-a17f-a0390d23b346","Type":"ContainerStarted","Data":"66c5775896908085664399d1c121644707abf33c6df401290a8100e4d1a7d269"}
Mar 11 10:14:32 crc kubenswrapper[4808]: I0311 10:14:32.603629 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.603610981 podStartE2EDuration="2.603610981s" podCreationTimestamp="2026-03-11 10:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:14:32.601622745 +0000 UTC m=+5723.554946075" watchObservedRunningTime="2026-03-11 10:14:32.603610981 +0000 UTC m=+5723.556934311"
Mar 11 10:14:38 crc kubenswrapper[4808]: I0311 10:14:38.789683 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:14:38 crc kubenswrapper[4808]: E0311 10:14:38.791141 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:14:51 crc kubenswrapper[4808]: I0311 10:14:51.789649 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:14:51 crc kubenswrapper[4808]: E0311 10:14:51.790295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:14:57 crc kubenswrapper[4808]: I0311 10:14:57.048260 4808 scope.go:117] "RemoveContainer" containerID="f28171a0c7103528519f133e1465494817b7805f2ce2a6a73885d25fcd958065"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.165850 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"]
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.167880 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.170492 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.170772 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.175088 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"]
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.230603 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5f9t\" (UniqueName: \"kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.230660 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.230730 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.332930 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5f9t\" (UniqueName: \"kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.333308 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.333382 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.334635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.343025 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.351595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5f9t\" (UniqueName: \"kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t\") pod \"collect-profiles-29553735-l55nl\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.533816 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:00 crc kubenswrapper[4808]: I0311 10:15:00.982723 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"]
Mar 11 10:15:01 crc kubenswrapper[4808]: I0311 10:15:01.828167 4808 generic.go:334] "Generic (PLEG): container finished" podID="7ef37030-172f-4b45-8859-8b96ec417a10" containerID="504f7c5a1eab6b6884c4045681eebea5a4cb4092444e02b85762fa167291f55b" exitCode=0
Mar 11 10:15:01 crc kubenswrapper[4808]: I0311 10:15:01.828303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl" event={"ID":"7ef37030-172f-4b45-8859-8b96ec417a10","Type":"ContainerDied","Data":"504f7c5a1eab6b6884c4045681eebea5a4cb4092444e02b85762fa167291f55b"}
Mar 11 10:15:01 crc kubenswrapper[4808]: I0311 10:15:01.828517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl" event={"ID":"7ef37030-172f-4b45-8859-8b96ec417a10","Type":"ContainerStarted","Data":"d68cf31c42cc086115dd0cdad5c2df918b5ea6544c5244a6397d79f659900b94"}
Mar 11 10:15:02 crc kubenswrapper[4808]: I0311 10:15:02.790667 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:15:02 crc kubenswrapper[4808]: E0311 10:15:02.791253 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.169407 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.285767 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume\") pod \"7ef37030-172f-4b45-8859-8b96ec417a10\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") "
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.286299 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume\") pod \"7ef37030-172f-4b45-8859-8b96ec417a10\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") "
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.286420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5f9t\" (UniqueName: \"kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t\") pod \"7ef37030-172f-4b45-8859-8b96ec417a10\" (UID: \"7ef37030-172f-4b45-8859-8b96ec417a10\") "
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.286581 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ef37030-172f-4b45-8859-8b96ec417a10" (UID: "7ef37030-172f-4b45-8859-8b96ec417a10"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.287125 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef37030-172f-4b45-8859-8b96ec417a10-config-volume\") on node \"crc\" DevicePath \"\""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.295780 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ef37030-172f-4b45-8859-8b96ec417a10" (UID: "7ef37030-172f-4b45-8859-8b96ec417a10"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.296173 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t" (OuterVolumeSpecName: "kube-api-access-k5f9t") pod "7ef37030-172f-4b45-8859-8b96ec417a10" (UID: "7ef37030-172f-4b45-8859-8b96ec417a10"). InnerVolumeSpecName "kube-api-access-k5f9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.388998 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef37030-172f-4b45-8859-8b96ec417a10-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.389039 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5f9t\" (UniqueName: \"kubernetes.io/projected/7ef37030-172f-4b45-8859-8b96ec417a10-kube-api-access-k5f9t\") on node \"crc\" DevicePath \"\""
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.850921 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl" event={"ID":"7ef37030-172f-4b45-8859-8b96ec417a10","Type":"ContainerDied","Data":"d68cf31c42cc086115dd0cdad5c2df918b5ea6544c5244a6397d79f659900b94"}
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.850962 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d68cf31c42cc086115dd0cdad5c2df918b5ea6544c5244a6397d79f659900b94"
Mar 11 10:15:03 crc kubenswrapper[4808]: I0311 10:15:03.850973 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-l55nl"
Mar 11 10:15:04 crc kubenswrapper[4808]: I0311 10:15:04.265679 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh"]
Mar 11 10:15:04 crc kubenswrapper[4808]: I0311 10:15:04.278320 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-9xbqh"]
Mar 11 10:15:05 crc kubenswrapper[4808]: I0311 10:15:05.818317 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be40bf22-dc95-42d9-89ab-ad8bde725469" path="/var/lib/kubelet/pods/be40bf22-dc95-42d9-89ab-ad8bde725469/volumes"
Mar 11 10:15:13 crc kubenswrapper[4808]: I0311 10:15:13.789949 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:15:13 crc kubenswrapper[4808]: E0311 10:15:13.790688 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:15:25 crc kubenswrapper[4808]: I0311 10:15:25.789686 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:15:25 crc kubenswrapper[4808]: E0311 10:15:25.790496 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:15:36 crc kubenswrapper[4808]: I0311 10:15:36.790171 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:15:36 crc kubenswrapper[4808]: E0311 10:15:36.791088 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:15:49 crc kubenswrapper[4808]: I0311 10:15:49.803429 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:15:49 crc kubenswrapper[4808]: E0311 10:15:49.804349 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:15:57 crc kubenswrapper[4808]: I0311 10:15:57.140401 4808 scope.go:117] "RemoveContainer" containerID="992934f4ee9270c3e7ee0f6747df2a4d888a787f9fff9947d49e533ad5ffab1d"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.139004 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553736-7dfbb"]
Mar 11 10:16:00 crc kubenswrapper[4808]: E0311 10:16:00.140290 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef37030-172f-4b45-8859-8b96ec417a10" containerName="collect-profiles"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.140308 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef37030-172f-4b45-8859-8b96ec417a10" containerName="collect-profiles"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.140523 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef37030-172f-4b45-8859-8b96ec417a10" containerName="collect-profiles"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.141434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.143307 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.143932 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.143953 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.145135 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-7dfbb"]
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.242008 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tbh\" (UniqueName: \"kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh\") pod \"auto-csr-approver-29553736-7dfbb\" (UID: \"64e5bf6a-73d8-4f40-853d-3532947c1560\") " pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.344163 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69tbh\" (UniqueName: \"kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh\") pod \"auto-csr-approver-29553736-7dfbb\" (UID: \"64e5bf6a-73d8-4f40-853d-3532947c1560\") " pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.367201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69tbh\" (UniqueName: \"kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh\") pod \"auto-csr-approver-29553736-7dfbb\" (UID: \"64e5bf6a-73d8-4f40-853d-3532947c1560\") " pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.473953 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.790311 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:16:00 crc kubenswrapper[4808]: E0311 10:16:00.791175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.935553 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 11 10:16:00 crc kubenswrapper[4808]: I0311 10:16:00.936889 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-7dfbb"]
Mar 11 10:16:01 crc kubenswrapper[4808]: I0311 10:16:01.689997 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-7dfbb" event={"ID":"64e5bf6a-73d8-4f40-853d-3532947c1560","Type":"ContainerStarted","Data":"ba3b07765935cd60bfa1cb17dbca5101a525d38e6e4e60c338e74b944bab891e"}
Mar 11 10:16:02 crc kubenswrapper[4808]: I0311 10:16:02.698982 4808 generic.go:334] "Generic (PLEG): container finished" podID="64e5bf6a-73d8-4f40-853d-3532947c1560" containerID="b039cc1cab752eb310dc8e82b620074a5eb9ff4ae7a4f913a4ab7b6a739d4d8e" exitCode=0
Mar 11 10:16:02 crc kubenswrapper[4808]: I0311 10:16:02.699035 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-7dfbb" event={"ID":"64e5bf6a-73d8-4f40-853d-3532947c1560","Type":"ContainerDied","Data":"b039cc1cab752eb310dc8e82b620074a5eb9ff4ae7a4f913a4ab7b6a739d4d8e"}
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.099901 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.217615 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69tbh\" (UniqueName: \"kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh\") pod \"64e5bf6a-73d8-4f40-853d-3532947c1560\" (UID: \"64e5bf6a-73d8-4f40-853d-3532947c1560\") "
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.235286 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh" (OuterVolumeSpecName: "kube-api-access-69tbh") pod "64e5bf6a-73d8-4f40-853d-3532947c1560" (UID: "64e5bf6a-73d8-4f40-853d-3532947c1560"). InnerVolumeSpecName "kube-api-access-69tbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.320000 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69tbh\" (UniqueName: \"kubernetes.io/projected/64e5bf6a-73d8-4f40-853d-3532947c1560-kube-api-access-69tbh\") on node \"crc\" DevicePath \"\""
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.717611 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-7dfbb" event={"ID":"64e5bf6a-73d8-4f40-853d-3532947c1560","Type":"ContainerDied","Data":"ba3b07765935cd60bfa1cb17dbca5101a525d38e6e4e60c338e74b944bab891e"}
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.717651 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3b07765935cd60bfa1cb17dbca5101a525d38e6e4e60c338e74b944bab891e"
Mar 11 10:16:04 crc kubenswrapper[4808]: I0311 10:16:04.717714 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-7dfbb"
Mar 11 10:16:05 crc kubenswrapper[4808]: I0311 10:16:05.249042 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-j45zk"]
Mar 11 10:16:05 crc kubenswrapper[4808]: I0311 10:16:05.271028 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-j45zk"]
Mar 11 10:16:05 crc kubenswrapper[4808]: I0311 10:16:05.800417 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a16095-2446-4848-8204-65f4b867368f" path="/var/lib/kubelet/pods/94a16095-2446-4848-8204-65f4b867368f/volumes"
Mar 11 10:16:14 crc kubenswrapper[4808]: I0311 10:16:14.791064 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:16:14 crc kubenswrapper[4808]: E0311 10:16:14.791776 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:16:25 crc kubenswrapper[4808]: I0311 10:16:25.789679 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:16:25 crc kubenswrapper[4808]: E0311 10:16:25.790522 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:16:40 crc kubenswrapper[4808]: I0311 10:16:40.789715 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:16:40 crc kubenswrapper[4808]: E0311 10:16:40.790487 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:16:47 crc kubenswrapper[4808]: I0311 10:16:47.064949 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d6pnc"]
Mar 11 10:16:47 crc kubenswrapper[4808]: I0311 10:16:47.071831 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d6pnc"]
Mar 11 10:16:47 crc kubenswrapper[4808]: I0311 10:16:47.806567 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37025f25-23b4-48b8-ba7c-c28df7c641f2" path="/var/lib/kubelet/pods/37025f25-23b4-48b8-ba7c-c28df7c641f2/volumes"
Mar 11 10:16:55 crc kubenswrapper[4808]: I0311 10:16:55.789827 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:16:55 crc kubenswrapper[4808]: E0311 10:16:55.790628 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:16:57 crc kubenswrapper[4808]: I0311 10:16:57.202810 4808 scope.go:117] "RemoveContainer" containerID="00f3757d9aa1b70765b5497cd1f8c0ea6fea815311691b3a94fb50e2d6c4ba6f"
Mar 11 10:16:57 crc kubenswrapper[4808]: I0311 10:16:57.232602 4808 scope.go:117] "RemoveContainer" containerID="db7ff7442ec651d44c73e2b1eb0f57714519159a18293099d69078af59dae38e"
Mar 11 10:17:09 crc kubenswrapper[4808]: I0311 10:17:09.801645 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:17:09 crc kubenswrapper[4808]: E0311 10:17:09.807289 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:17:22 crc kubenswrapper[4808]: I0311 10:17:22.790848 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:17:22 crc kubenswrapper[4808]: E0311 10:17:22.791753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:17:34 crc kubenswrapper[4808]: I0311 10:17:34.790243 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73"
Mar 11 10:17:34 crc kubenswrapper[4808]: E0311 10:17:34.791110 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.051461 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"]
Mar 11 10:17:46 crc kubenswrapper[4808]: E0311 10:17:46.052393 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e5bf6a-73d8-4f40-853d-3532947c1560" containerName="oc"
Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.052407 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e5bf6a-73d8-4f40-853d-3532947c1560" containerName="oc"
Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.052581 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="64e5bf6a-73d8-4f40-853d-3532947c1560" containerName="oc" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.053753 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.091471 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.092115 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhk6p\" (UniqueName: \"kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.092269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.104249 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"] Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.193669 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhk6p\" (UniqueName: \"kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p\") 
pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.193739 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.193797 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.194332 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.194907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content\") pod \"redhat-marketplace-znqht\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.218639 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhk6p\" (UniqueName: \"kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p\") pod \"redhat-marketplace-znqht\" (UID: 
\"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.382661 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:46 crc kubenswrapper[4808]: I0311 10:17:46.827199 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"] Mar 11 10:17:47 crc kubenswrapper[4808]: I0311 10:17:47.605573 4808 generic.go:334] "Generic (PLEG): container finished" podID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerID="1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59" exitCode=0 Mar 11 10:17:47 crc kubenswrapper[4808]: I0311 10:17:47.605643 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerDied","Data":"1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59"} Mar 11 10:17:47 crc kubenswrapper[4808]: I0311 10:17:47.605932 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerStarted","Data":"d750728f7da975be858573694566ecf3f15f50fa33d76cb122c26906e5e580c7"} Mar 11 10:17:48 crc kubenswrapper[4808]: I0311 10:17:48.614741 4808 generic.go:334] "Generic (PLEG): container finished" podID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerID="c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7" exitCode=0 Mar 11 10:17:48 crc kubenswrapper[4808]: I0311 10:17:48.614837 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerDied","Data":"c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7"} Mar 11 10:17:48 crc kubenswrapper[4808]: I0311 10:17:48.788966 
4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:17:48 crc kubenswrapper[4808]: E0311 10:17:48.789290 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:17:49 crc kubenswrapper[4808]: I0311 10:17:49.625527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerStarted","Data":"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39"} Mar 11 10:17:49 crc kubenswrapper[4808]: I0311 10:17:49.651125 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znqht" podStartSLOduration=2.044309931 podStartE2EDuration="3.651104234s" podCreationTimestamp="2026-03-11 10:17:46 +0000 UTC" firstStartedPulling="2026-03-11 10:17:47.607133548 +0000 UTC m=+5918.560456868" lastFinishedPulling="2026-03-11 10:17:49.213927811 +0000 UTC m=+5920.167251171" observedRunningTime="2026-03-11 10:17:49.646998748 +0000 UTC m=+5920.600322078" watchObservedRunningTime="2026-03-11 10:17:49.651104234 +0000 UTC m=+5920.604427554" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.499924 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.502563 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.514391 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.604206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbn9\" (UniqueName: \"kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.604332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.604406 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.706503 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.706590 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.706697 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbn9\" (UniqueName: \"kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.707712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.707724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.734128 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbn9\" (UniqueName: \"kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9\") pod \"redhat-operators-r8xpn\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:52 crc kubenswrapper[4808]: I0311 10:17:52.839315 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:17:53 crc kubenswrapper[4808]: I0311 10:17:53.279822 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:17:53 crc kubenswrapper[4808]: I0311 10:17:53.656942 4808 generic.go:334] "Generic (PLEG): container finished" podID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerID="4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5" exitCode=0 Mar 11 10:17:53 crc kubenswrapper[4808]: I0311 10:17:53.657042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerDied","Data":"4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5"} Mar 11 10:17:53 crc kubenswrapper[4808]: I0311 10:17:53.657231 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerStarted","Data":"0f4d892d40c9aa4cd46bcd2b25c4c76e5459daef50f66bb1c0091fc44eeabf2c"} Mar 11 10:17:55 crc kubenswrapper[4808]: I0311 10:17:55.679501 4808 generic.go:334] "Generic (PLEG): container finished" podID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerID="8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728" exitCode=0 Mar 11 10:17:55 crc kubenswrapper[4808]: I0311 10:17:55.679552 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerDied","Data":"8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728"} Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.382993 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.383386 4808 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.430508 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.688846 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerStarted","Data":"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b"} Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.707717 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r8xpn" podStartSLOduration=2.2838385150000002 podStartE2EDuration="4.70769704s" podCreationTimestamp="2026-03-11 10:17:52 +0000 UTC" firstStartedPulling="2026-03-11 10:17:53.658549254 +0000 UTC m=+5924.611872574" lastFinishedPulling="2026-03-11 10:17:56.082407779 +0000 UTC m=+5927.035731099" observedRunningTime="2026-03-11 10:17:56.704641294 +0000 UTC m=+5927.657964614" watchObservedRunningTime="2026-03-11 10:17:56.70769704 +0000 UTC m=+5927.661020380" Mar 11 10:17:56 crc kubenswrapper[4808]: I0311 10:17:56.731158 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:17:58 crc kubenswrapper[4808]: I0311 10:17:58.417776 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"] Mar 11 10:17:59 crc kubenswrapper[4808]: I0311 10:17:59.729335 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znqht" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="registry-server" containerID="cri-o://350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39" gracePeriod=2 Mar 11 10:18:00 crc 
kubenswrapper[4808]: I0311 10:18:00.148497 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553738-bk6fl"] Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.149804 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.151733 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.153294 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.153425 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.157766 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-bk6fl"] Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.256506 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwf9\" (UniqueName: \"kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9\") pod \"auto-csr-approver-29553738-bk6fl\" (UID: \"48fe4790-96d9-4697-b3bf-1ef43aa679ef\") " pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.358019 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwf9\" (UniqueName: \"kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9\") pod \"auto-csr-approver-29553738-bk6fl\" (UID: \"48fe4790-96d9-4697-b3bf-1ef43aa679ef\") " pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.387131 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-krwf9\" (UniqueName: \"kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9\") pod \"auto-csr-approver-29553738-bk6fl\" (UID: \"48fe4790-96d9-4697-b3bf-1ef43aa679ef\") " pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.477729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.656674 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.738980 4808 generic.go:334] "Generic (PLEG): container finished" podID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerID="350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39" exitCode=0 Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.739391 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerDied","Data":"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39"} Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.739424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znqht" event={"ID":"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5","Type":"ContainerDied","Data":"d750728f7da975be858573694566ecf3f15f50fa33d76cb122c26906e5e580c7"} Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.739462 4808 scope.go:117] "RemoveContainer" containerID="350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.739634 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znqht" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.762970 4808 scope.go:117] "RemoveContainer" containerID="c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.763593 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities\") pod \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.763648 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content\") pod \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.763922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhk6p\" (UniqueName: \"kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p\") pod \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\" (UID: \"c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5\") " Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.764860 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities" (OuterVolumeSpecName: "utilities") pod "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" (UID: "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.784583 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p" (OuterVolumeSpecName: "kube-api-access-nhk6p") pod "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" (UID: "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5"). InnerVolumeSpecName "kube-api-access-nhk6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.797844 4808 scope.go:117] "RemoveContainer" containerID="1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.801891 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" (UID: "c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.814251 4808 scope.go:117] "RemoveContainer" containerID="350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39" Mar 11 10:18:00 crc kubenswrapper[4808]: E0311 10:18:00.814772 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39\": container with ID starting with 350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39 not found: ID does not exist" containerID="350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.814808 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39"} err="failed to get container status \"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39\": rpc error: code = NotFound desc = could not find container \"350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39\": container with ID starting with 350943813c9d71c032592783e31d1d06d872b4a6b0c2ed7e5292400f9504bd39 not found: ID does not exist" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.814832 4808 scope.go:117] "RemoveContainer" containerID="c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7" Mar 11 10:18:00 crc kubenswrapper[4808]: E0311 10:18:00.815129 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7\": container with ID starting with c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7 not found: ID does not exist" containerID="c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.815173 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7"} err="failed to get container status \"c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7\": rpc error: code = NotFound desc = could not find container \"c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7\": container with ID starting with c65b50bbd40fe5fa8e1d20dc73c1762b1e26a173f6f8dffd0be3663b37a342e7 not found: ID does not exist" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.815198 4808 scope.go:117] "RemoveContainer" containerID="1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59" Mar 11 10:18:00 crc kubenswrapper[4808]: E0311 10:18:00.815528 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59\": container with ID starting with 1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59 not found: ID does not exist" containerID="1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.815561 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59"} err="failed to get container status \"1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59\": rpc error: code = NotFound desc = could not find container \"1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59\": container with ID starting with 1810945fbfba580f1c671447561cf97937da2df66421bde7d70002a0e947fb59 not found: ID does not exist" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.866569 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-utilities\") on node 
\"crc\" DevicePath \"\"" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.866607 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.866621 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhk6p\" (UniqueName: \"kubernetes.io/projected/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5-kube-api-access-nhk6p\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:00 crc kubenswrapper[4808]: I0311 10:18:00.964978 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-bk6fl"] Mar 11 10:18:00 crc kubenswrapper[4808]: W0311 10:18:00.974909 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48fe4790_96d9_4697_b3bf_1ef43aa679ef.slice/crio-3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be WatchSource:0}: Error finding container 3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be: Status 404 returned error can't find the container with id 3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be Mar 11 10:18:01 crc kubenswrapper[4808]: I0311 10:18:01.079674 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"] Mar 11 10:18:01 crc kubenswrapper[4808]: I0311 10:18:01.087409 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znqht"] Mar 11 10:18:01 crc kubenswrapper[4808]: I0311 10:18:01.752134 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" event={"ID":"48fe4790-96d9-4697-b3bf-1ef43aa679ef","Type":"ContainerStarted","Data":"3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be"} Mar 11 10:18:01 crc 
kubenswrapper[4808]: I0311 10:18:01.804597 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" path="/var/lib/kubelet/pods/c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5/volumes" Mar 11 10:18:02 crc kubenswrapper[4808]: I0311 10:18:02.790051 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:18:02 crc kubenswrapper[4808]: E0311 10:18:02.790850 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:18:02 crc kubenswrapper[4808]: I0311 10:18:02.839831 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:02 crc kubenswrapper[4808]: I0311 10:18:02.839911 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:02 crc kubenswrapper[4808]: I0311 10:18:02.886277 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:03 crc kubenswrapper[4808]: I0311 10:18:03.822878 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:04 crc kubenswrapper[4808]: I0311 10:18:04.777430 4808 generic.go:334] "Generic (PLEG): container finished" podID="48fe4790-96d9-4697-b3bf-1ef43aa679ef" containerID="1191df428e66bead61cff11d4da4c0136b53e812dd50bd6843afa40fde54d414" exitCode=0 Mar 11 10:18:04 crc kubenswrapper[4808]: I0311 10:18:04.777522 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" event={"ID":"48fe4790-96d9-4697-b3bf-1ef43aa679ef","Type":"ContainerDied","Data":"1191df428e66bead61cff11d4da4c0136b53e812dd50bd6843afa40fde54d414"} Mar 11 10:18:05 crc kubenswrapper[4808]: I0311 10:18:05.011726 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:18:05 crc kubenswrapper[4808]: I0311 10:18:05.784069 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r8xpn" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="registry-server" containerID="cri-o://7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b" gracePeriod=2 Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.086891 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.199408 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.273784 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krwf9\" (UniqueName: \"kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9\") pod \"48fe4790-96d9-4697-b3bf-1ef43aa679ef\" (UID: \"48fe4790-96d9-4697-b3bf-1ef43aa679ef\") " Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.280795 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9" (OuterVolumeSpecName: "kube-api-access-krwf9") pod "48fe4790-96d9-4697-b3bf-1ef43aa679ef" (UID: "48fe4790-96d9-4697-b3bf-1ef43aa679ef"). InnerVolumeSpecName "kube-api-access-krwf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.375414 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities\") pod \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.375496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content\") pod \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.375667 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdbn9\" (UniqueName: \"kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9\") pod \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\" (UID: \"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2\") " Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.376322 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krwf9\" (UniqueName: \"kubernetes.io/projected/48fe4790-96d9-4697-b3bf-1ef43aa679ef-kube-api-access-krwf9\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.378255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities" (OuterVolumeSpecName: "utilities") pod "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" (UID: "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.380260 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9" (OuterVolumeSpecName: "kube-api-access-cdbn9") pod "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" (UID: "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2"). InnerVolumeSpecName "kube-api-access-cdbn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.477529 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdbn9\" (UniqueName: \"kubernetes.io/projected/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-kube-api-access-cdbn9\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.477566 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.508519 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" (UID: "1b49a302-b4c3-4751-a4fe-d6c12dcf25e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.579590 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.800554 4808 generic.go:334] "Generic (PLEG): container finished" podID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerID="7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b" exitCode=0 Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.800662 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8xpn" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.800722 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerDied","Data":"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b"} Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.801203 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8xpn" event={"ID":"1b49a302-b4c3-4751-a4fe-d6c12dcf25e2","Type":"ContainerDied","Data":"0f4d892d40c9aa4cd46bcd2b25c4c76e5459daef50f66bb1c0091fc44eeabf2c"} Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.801259 4808 scope.go:117] "RemoveContainer" containerID="7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.807680 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.807722 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-bk6fl" event={"ID":"48fe4790-96d9-4697-b3bf-1ef43aa679ef","Type":"ContainerDied","Data":"3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be"} Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.807790 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c248c2bd627ebe5164e1e48c8b2f7f90e08e850fac47e5c486748e4e79d99be" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.843690 4808 scope.go:117] "RemoveContainer" containerID="8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.867881 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.876398 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r8xpn"] Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.893993 4808 scope.go:117] "RemoveContainer" containerID="4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.943527 4808 scope.go:117] "RemoveContainer" containerID="7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b" Mar 11 10:18:06 crc kubenswrapper[4808]: E0311 10:18:06.943966 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b\": container with ID starting with 7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b not found: ID does not exist" containerID="7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b" Mar 11 10:18:06 crc 
kubenswrapper[4808]: I0311 10:18:06.944013 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b"} err="failed to get container status \"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b\": rpc error: code = NotFound desc = could not find container \"7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b\": container with ID starting with 7d76fdeffe96ad4fed46df41924db1afdbb92118fc987041ee8138bbb455e23b not found: ID does not exist" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.944042 4808 scope.go:117] "RemoveContainer" containerID="8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728" Mar 11 10:18:06 crc kubenswrapper[4808]: E0311 10:18:06.944392 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728\": container with ID starting with 8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728 not found: ID does not exist" containerID="8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.944448 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728"} err="failed to get container status \"8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728\": rpc error: code = NotFound desc = could not find container \"8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728\": container with ID starting with 8fa5a5f35613029a482e4eb065a55e0aa4c0829674b0b73ee605f69e60f14728 not found: ID does not exist" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.944480 4808 scope.go:117] "RemoveContainer" containerID="4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5" Mar 11 
10:18:06 crc kubenswrapper[4808]: E0311 10:18:06.944723 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5\": container with ID starting with 4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5 not found: ID does not exist" containerID="4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5" Mar 11 10:18:06 crc kubenswrapper[4808]: I0311 10:18:06.944750 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5"} err="failed to get container status \"4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5\": rpc error: code = NotFound desc = could not find container \"4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5\": container with ID starting with 4a37750a8541fc57d8fc4497e7ca2cc4d266ad81b62126dfc92f35556b9898b5 not found: ID does not exist" Mar 11 10:18:07 crc kubenswrapper[4808]: I0311 10:18:07.152878 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-2qmwm"] Mar 11 10:18:07 crc kubenswrapper[4808]: I0311 10:18:07.161017 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-2qmwm"] Mar 11 10:18:07 crc kubenswrapper[4808]: I0311 10:18:07.803435 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1814f4a4-98ad-4d9f-a953-06998cc96484" path="/var/lib/kubelet/pods/1814f4a4-98ad-4d9f-a953-06998cc96484/volumes" Mar 11 10:18:07 crc kubenswrapper[4808]: I0311 10:18:07.804737 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" path="/var/lib/kubelet/pods/1b49a302-b4c3-4751-a4fe-d6c12dcf25e2/volumes" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619198 4808 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.619618 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="extract-content" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619636 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="extract-content" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.619655 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fe4790-96d9-4697-b3bf-1ef43aa679ef" containerName="oc" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619919 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fe4790-96d9-4697-b3bf-1ef43aa679ef" containerName="oc" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.619932 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="extract-content" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619940 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="extract-content" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.619952 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="extract-utilities" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619961 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="extract-utilities" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.619980 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.619987 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.620004 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="extract-utilities" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.620011 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="extract-utilities" Mar 11 10:18:08 crc kubenswrapper[4808]: E0311 10:18:08.620024 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.620031 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.620310 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fe4790-96d9-4697-b3bf-1ef43aa679ef" containerName="oc" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.620323 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a3a9ce-4e0e-40c9-9fc6-fa26f48cb9c5" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.620338 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b49a302-b4c3-4751-a4fe-d6c12dcf25e2" containerName="registry-server" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.621867 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.630767 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.717483 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.717530 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.717680 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.819195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.819235 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.819252 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.819778 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.820038 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.841196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2\") pod \"community-operators-7mtmx\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:08 crc kubenswrapper[4808]: I0311 10:18:08.943242 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:09 crc kubenswrapper[4808]: I0311 10:18:09.480607 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:09 crc kubenswrapper[4808]: I0311 10:18:09.840247 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerID="62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9" exitCode=0 Mar 11 10:18:09 crc kubenswrapper[4808]: I0311 10:18:09.840347 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerDied","Data":"62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9"} Mar 11 10:18:09 crc kubenswrapper[4808]: I0311 10:18:09.840585 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerStarted","Data":"35a913715e1ab4d6c8a70850ed1e24db4bc1aade3c28fce22ff70ab9a66ffa2c"} Mar 11 10:18:11 crc kubenswrapper[4808]: I0311 10:18:11.857022 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerID="7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9" exitCode=0 Mar 11 10:18:11 crc kubenswrapper[4808]: I0311 10:18:11.857131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerDied","Data":"7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9"} Mar 11 10:18:12 crc kubenswrapper[4808]: I0311 10:18:12.868024 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" 
event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerStarted","Data":"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e"} Mar 11 10:18:12 crc kubenswrapper[4808]: I0311 10:18:12.887789 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mtmx" podStartSLOduration=2.3613931 podStartE2EDuration="4.887768099s" podCreationTimestamp="2026-03-11 10:18:08 +0000 UTC" firstStartedPulling="2026-03-11 10:18:09.84348841 +0000 UTC m=+5940.796811730" lastFinishedPulling="2026-03-11 10:18:12.369863409 +0000 UTC m=+5943.323186729" observedRunningTime="2026-03-11 10:18:12.884296261 +0000 UTC m=+5943.837619591" watchObservedRunningTime="2026-03-11 10:18:12.887768099 +0000 UTC m=+5943.841091409" Mar 11 10:18:13 crc kubenswrapper[4808]: I0311 10:18:13.789419 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:18:13 crc kubenswrapper[4808]: E0311 10:18:13.790277 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:18:18 crc kubenswrapper[4808]: I0311 10:18:18.943719 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:18 crc kubenswrapper[4808]: I0311 10:18:18.945527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:18 crc kubenswrapper[4808]: I0311 10:18:18.997384 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:19 crc kubenswrapper[4808]: I0311 10:18:19.972023 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:20 crc kubenswrapper[4808]: I0311 10:18:20.017356 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:21 crc kubenswrapper[4808]: I0311 10:18:21.934664 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mtmx" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="registry-server" containerID="cri-o://257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e" gracePeriod=2 Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.694312 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.784954 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content\") pod \"ac10923c-18c4-4d9a-9b64-9efd9469223a\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.785021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities\") pod \"ac10923c-18c4-4d9a-9b64-9efd9469223a\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.785137 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2\") pod 
\"ac10923c-18c4-4d9a-9b64-9efd9469223a\" (UID: \"ac10923c-18c4-4d9a-9b64-9efd9469223a\") " Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.787229 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities" (OuterVolumeSpecName: "utilities") pod "ac10923c-18c4-4d9a-9b64-9efd9469223a" (UID: "ac10923c-18c4-4d9a-9b64-9efd9469223a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.800659 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2" (OuterVolumeSpecName: "kube-api-access-mlhn2") pod "ac10923c-18c4-4d9a-9b64-9efd9469223a" (UID: "ac10923c-18c4-4d9a-9b64-9efd9469223a"). InnerVolumeSpecName "kube-api-access-mlhn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.863636 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac10923c-18c4-4d9a-9b64-9efd9469223a" (UID: "ac10923c-18c4-4d9a-9b64-9efd9469223a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.887439 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.887511 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac10923c-18c4-4d9a-9b64-9efd9469223a-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.887524 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/ac10923c-18c4-4d9a-9b64-9efd9469223a-kube-api-access-mlhn2\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.965718 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerID="257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e" exitCode=0 Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.965997 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerDied","Data":"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e"} Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.966228 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mtmx" event={"ID":"ac10923c-18c4-4d9a-9b64-9efd9469223a","Type":"ContainerDied","Data":"35a913715e1ab4d6c8a70850ed1e24db4bc1aade3c28fce22ff70ab9a66ffa2c"} Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.966327 4808 scope.go:117] "RemoveContainer" containerID="257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 
10:18:23.966067 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mtmx" Mar 11 10:18:23 crc kubenswrapper[4808]: I0311 10:18:23.994609 4808 scope.go:117] "RemoveContainer" containerID="7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.005347 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.012381 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mtmx"] Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.020689 4808 scope.go:117] "RemoveContainer" containerID="62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.049067 4808 scope.go:117] "RemoveContainer" containerID="257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e" Mar 11 10:18:24 crc kubenswrapper[4808]: E0311 10:18:24.052623 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e\": container with ID starting with 257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e not found: ID does not exist" containerID="257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.052662 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e"} err="failed to get container status \"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e\": rpc error: code = NotFound desc = could not find container \"257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e\": container with ID starting with 
257e4554fbad10aad80e135c3bd50bc12579e3694038500556acb3f78347800e not found: ID does not exist" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.052700 4808 scope.go:117] "RemoveContainer" containerID="7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9" Mar 11 10:18:24 crc kubenswrapper[4808]: E0311 10:18:24.052988 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9\": container with ID starting with 7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9 not found: ID does not exist" containerID="7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.053026 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9"} err="failed to get container status \"7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9\": rpc error: code = NotFound desc = could not find container \"7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9\": container with ID starting with 7f11e5e37452dfe4dcb64a2b9755cf5250a0e281553c3c3e3a4b734d42ab01f9 not found: ID does not exist" Mar 11 10:18:24 crc kubenswrapper[4808]: I0311 10:18:24.053041 4808 scope.go:117] "RemoveContainer" containerID="62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9" Mar 11 10:18:24 crc kubenswrapper[4808]: E0311 10:18:24.053266 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9\": container with ID starting with 62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9 not found: ID does not exist" containerID="62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9" Mar 11 10:18:24 crc 
kubenswrapper[4808]: I0311 10:18:24.053290 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9"} err="failed to get container status \"62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9\": rpc error: code = NotFound desc = could not find container \"62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9\": container with ID starting with 62e63c9046a3c8a114a65b2a0b7642fdd79b95a732a77be899b582ed27eaecc9 not found: ID does not exist" Mar 11 10:18:25 crc kubenswrapper[4808]: I0311 10:18:25.799597 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" path="/var/lib/kubelet/pods/ac10923c-18c4-4d9a-9b64-9efd9469223a/volumes" Mar 11 10:18:28 crc kubenswrapper[4808]: I0311 10:18:28.791375 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:18:28 crc kubenswrapper[4808]: E0311 10:18:28.792343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:18:41 crc kubenswrapper[4808]: I0311 10:18:41.790008 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:18:41 crc kubenswrapper[4808]: E0311 10:18:41.790965 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:18:56 crc kubenswrapper[4808]: I0311 10:18:56.789895 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:18:56 crc kubenswrapper[4808]: E0311 10:18:56.790933 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:18:57 crc kubenswrapper[4808]: I0311 10:18:57.346588 4808 scope.go:117] "RemoveContainer" containerID="57aa520cffcff80b7f46b8071963467abd0ad96a1a0d2f702ebf8a1232669f80" Mar 11 10:18:57 crc kubenswrapper[4808]: I0311 10:18:57.396213 4808 scope.go:117] "RemoveContainer" containerID="32525adbc52daa9218478bd0a179ab28a93f9ddf5aa84ecc966b6204f6192462" Mar 11 10:18:57 crc kubenswrapper[4808]: I0311 10:18:57.499115 4808 scope.go:117] "RemoveContainer" containerID="6668edf19dbddb4c8f58dc63731fa46b017caa8f004a3f943d6811a5b05d6215" Mar 11 10:19:08 crc kubenswrapper[4808]: I0311 10:19:08.789245 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:19:08 crc kubenswrapper[4808]: E0311 10:19:08.790185 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:19:20 crc kubenswrapper[4808]: I0311 10:19:20.789589 4808 scope.go:117] "RemoveContainer" containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:19:21 crc kubenswrapper[4808]: I0311 10:19:21.491629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9"} Mar 11 10:19:57 crc kubenswrapper[4808]: I0311 10:19:57.627413 4808 scope.go:117] "RemoveContainer" containerID="afaa5cb37ca00d3e01df722d961c7c3895a32b15b8cd7b4d7c4eb6cbe6e5cf30" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.142251 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553740-pqw28"] Mar 11 10:20:00 crc kubenswrapper[4808]: E0311 10:20:00.143252 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="extract-content" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.143272 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="extract-content" Mar 11 10:20:00 crc kubenswrapper[4808]: E0311 10:20:00.143297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="registry-server" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.143304 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="registry-server" Mar 11 10:20:00 crc kubenswrapper[4808]: E0311 10:20:00.143318 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="extract-utilities" Mar 11 10:20:00 
crc kubenswrapper[4808]: I0311 10:20:00.143327 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="extract-utilities" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.143540 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac10923c-18c4-4d9a-9b64-9efd9469223a" containerName="registry-server" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.144230 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.146192 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.146288 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.147668 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.153341 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-pqw28"] Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.208274 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlsv\" (UniqueName: \"kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv\") pod \"auto-csr-approver-29553740-pqw28\" (UID: \"4ff122be-684e-4fff-bdd4-925636ea0afd\") " pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.310300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jlsv\" (UniqueName: \"kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv\") pod 
\"auto-csr-approver-29553740-pqw28\" (UID: \"4ff122be-684e-4fff-bdd4-925636ea0afd\") " pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.334541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jlsv\" (UniqueName: \"kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv\") pod \"auto-csr-approver-29553740-pqw28\" (UID: \"4ff122be-684e-4fff-bdd4-925636ea0afd\") " pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.463767 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:00 crc kubenswrapper[4808]: I0311 10:20:00.959641 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-pqw28"] Mar 11 10:20:01 crc kubenswrapper[4808]: I0311 10:20:01.814584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-pqw28" event={"ID":"4ff122be-684e-4fff-bdd4-925636ea0afd","Type":"ContainerStarted","Data":"a582ded7f11c1dfbba6b49abe43b5d76d1ac6aaff09a6534fc3aae881cfe350c"} Mar 11 10:20:02 crc kubenswrapper[4808]: I0311 10:20:02.823712 4808 generic.go:334] "Generic (PLEG): container finished" podID="4ff122be-684e-4fff-bdd4-925636ea0afd" containerID="60f7d1327d649e2b103119c43fc97c8bdb112e3446797aa4a454ecde4d3028a6" exitCode=0 Mar 11 10:20:02 crc kubenswrapper[4808]: I0311 10:20:02.823761 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-pqw28" event={"ID":"4ff122be-684e-4fff-bdd4-925636ea0afd","Type":"ContainerDied","Data":"60f7d1327d649e2b103119c43fc97c8bdb112e3446797aa4a454ecde4d3028a6"} Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.122741 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.183021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlsv\" (UniqueName: \"kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv\") pod \"4ff122be-684e-4fff-bdd4-925636ea0afd\" (UID: \"4ff122be-684e-4fff-bdd4-925636ea0afd\") " Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.189840 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv" (OuterVolumeSpecName: "kube-api-access-5jlsv") pod "4ff122be-684e-4fff-bdd4-925636ea0afd" (UID: "4ff122be-684e-4fff-bdd4-925636ea0afd"). InnerVolumeSpecName "kube-api-access-5jlsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.284800 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jlsv\" (UniqueName: \"kubernetes.io/projected/4ff122be-684e-4fff-bdd4-925636ea0afd-kube-api-access-5jlsv\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.839558 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-pqw28" event={"ID":"4ff122be-684e-4fff-bdd4-925636ea0afd","Type":"ContainerDied","Data":"a582ded7f11c1dfbba6b49abe43b5d76d1ac6aaff09a6534fc3aae881cfe350c"} Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.839906 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a582ded7f11c1dfbba6b49abe43b5d76d1ac6aaff09a6534fc3aae881cfe350c" Mar 11 10:20:04 crc kubenswrapper[4808]: I0311 10:20:04.839966 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-pqw28" Mar 11 10:20:05 crc kubenswrapper[4808]: I0311 10:20:05.185819 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-xzrt7"] Mar 11 10:20:05 crc kubenswrapper[4808]: I0311 10:20:05.194430 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-xzrt7"] Mar 11 10:20:05 crc kubenswrapper[4808]: I0311 10:20:05.798734 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f984b1-7897-4e8d-bdd9-38352f1a74db" path="/var/lib/kubelet/pods/97f984b1-7897-4e8d-bdd9-38352f1a74db/volumes" Mar 11 10:20:57 crc kubenswrapper[4808]: I0311 10:20:57.706157 4808 scope.go:117] "RemoveContainer" containerID="3de8830abb72784867cc299f4290b14566906cf1a8227a2524c51fa8cd1d3949" Mar 11 10:21:46 crc kubenswrapper[4808]: I0311 10:21:46.027865 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:21:46 crc kubenswrapper[4808]: I0311 10:21:46.029584 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.187542 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553742-tvhjb"] Mar 11 10:22:00 crc kubenswrapper[4808]: E0311 10:22:00.188376 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff122be-684e-4fff-bdd4-925636ea0afd" containerName="oc" Mar 11 10:22:00 crc 
kubenswrapper[4808]: I0311 10:22:00.188395 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff122be-684e-4fff-bdd4-925636ea0afd" containerName="oc" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.188632 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff122be-684e-4fff-bdd4-925636ea0afd" containerName="oc" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.189289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.192041 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.192625 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.193371 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.211731 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-tvhjb"] Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.215352 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq6l\" (UniqueName: \"kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l\") pod \"auto-csr-approver-29553742-tvhjb\" (UID: \"e3a690cc-d7b2-4d2b-a810-d792daf721d1\") " pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.316917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxq6l\" (UniqueName: \"kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l\") pod \"auto-csr-approver-29553742-tvhjb\" 
(UID: \"e3a690cc-d7b2-4d2b-a810-d792daf721d1\") " pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.340345 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxq6l\" (UniqueName: \"kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l\") pod \"auto-csr-approver-29553742-tvhjb\" (UID: \"e3a690cc-d7b2-4d2b-a810-d792daf721d1\") " pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.512676 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.945594 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-tvhjb"] Mar 11 10:22:00 crc kubenswrapper[4808]: I0311 10:22:00.952749 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:22:01 crc kubenswrapper[4808]: I0311 10:22:01.851190 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" event={"ID":"e3a690cc-d7b2-4d2b-a810-d792daf721d1","Type":"ContainerStarted","Data":"fbd2417f25500c6b7f22b351a58661d47a0451f10241f3eccf27a07ddf199f25"} Mar 11 10:22:02 crc kubenswrapper[4808]: I0311 10:22:02.863385 4808 generic.go:334] "Generic (PLEG): container finished" podID="e3a690cc-d7b2-4d2b-a810-d792daf721d1" containerID="cbf217815e76feb57f930177b64ced4c7e4e827d7f10a1eed0b73bf6384c6929" exitCode=0 Mar 11 10:22:02 crc kubenswrapper[4808]: I0311 10:22:02.863438 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" event={"ID":"e3a690cc-d7b2-4d2b-a810-d792daf721d1","Type":"ContainerDied","Data":"cbf217815e76feb57f930177b64ced4c7e4e827d7f10a1eed0b73bf6384c6929"} Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 
10:22:04.228005 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.387134 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxq6l\" (UniqueName: \"kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l\") pod \"e3a690cc-d7b2-4d2b-a810-d792daf721d1\" (UID: \"e3a690cc-d7b2-4d2b-a810-d792daf721d1\") " Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.393931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l" (OuterVolumeSpecName: "kube-api-access-mxq6l") pod "e3a690cc-d7b2-4d2b-a810-d792daf721d1" (UID: "e3a690cc-d7b2-4d2b-a810-d792daf721d1"). InnerVolumeSpecName "kube-api-access-mxq6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.489068 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxq6l\" (UniqueName: \"kubernetes.io/projected/e3a690cc-d7b2-4d2b-a810-d792daf721d1-kube-api-access-mxq6l\") on node \"crc\" DevicePath \"\"" Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.890925 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" event={"ID":"e3a690cc-d7b2-4d2b-a810-d792daf721d1","Type":"ContainerDied","Data":"fbd2417f25500c6b7f22b351a58661d47a0451f10241f3eccf27a07ddf199f25"} Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.891346 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd2417f25500c6b7f22b351a58661d47a0451f10241f3eccf27a07ddf199f25" Mar 11 10:22:04 crc kubenswrapper[4808]: I0311 10:22:04.890949 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-tvhjb" Mar 11 10:22:05 crc kubenswrapper[4808]: I0311 10:22:05.299014 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-7dfbb"] Mar 11 10:22:05 crc kubenswrapper[4808]: I0311 10:22:05.304665 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-7dfbb"] Mar 11 10:22:05 crc kubenswrapper[4808]: I0311 10:22:05.807664 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e5bf6a-73d8-4f40-853d-3532947c1560" path="/var/lib/kubelet/pods/64e5bf6a-73d8-4f40-853d-3532947c1560/volumes" Mar 11 10:22:16 crc kubenswrapper[4808]: I0311 10:22:16.027561 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:22:16 crc kubenswrapper[4808]: I0311 10:22:16.028227 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.028056 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.028738 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.028793 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.029604 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.029668 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9" gracePeriod=600 Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.278763 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9" exitCode=0 Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.278842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9"} Mar 11 10:22:46 crc kubenswrapper[4808]: I0311 10:22:46.279123 4808 scope.go:117] "RemoveContainer" 
containerID="d768d0cd77bb5d61a0071efa203bb3e177aa6a9fb593f9adbae7687be9d0ea73" Mar 11 10:22:47 crc kubenswrapper[4808]: I0311 10:22:47.289010 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f"} Mar 11 10:22:57 crc kubenswrapper[4808]: I0311 10:22:57.794328 4808 scope.go:117] "RemoveContainer" containerID="b039cc1cab752eb310dc8e82b620074a5eb9ff4ae7a4f913a4ab7b6a739d4d8e" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.425310 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:14 crc kubenswrapper[4808]: E0311 10:23:14.426400 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a690cc-d7b2-4d2b-a810-d792daf721d1" containerName="oc" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.426418 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a690cc-d7b2-4d2b-a810-d792daf721d1" containerName="oc" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.426713 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a690cc-d7b2-4d2b-a810-d792daf721d1" containerName="oc" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.431480 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.442739 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.508192 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.508238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.508449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgpfb\" (UniqueName: \"kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.610582 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgpfb\" (UniqueName: \"kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.610798 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.610827 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.611399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.611518 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.629912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgpfb\" (UniqueName: \"kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb\") pod \"certified-operators-7fqlp\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:14 crc kubenswrapper[4808]: I0311 10:23:14.757610 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:15 crc kubenswrapper[4808]: I0311 10:23:15.044119 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:15 crc kubenswrapper[4808]: I0311 10:23:15.539432 4808 generic.go:334] "Generic (PLEG): container finished" podID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerID="9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8" exitCode=0 Mar 11 10:23:15 crc kubenswrapper[4808]: I0311 10:23:15.539516 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerDied","Data":"9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8"} Mar 11 10:23:15 crc kubenswrapper[4808]: I0311 10:23:15.539745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerStarted","Data":"08072116cab898fdbde04f4c156ade2a6b4135f75474d2997b388030183f2aa7"} Mar 11 10:23:16 crc kubenswrapper[4808]: I0311 10:23:16.549650 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerStarted","Data":"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f"} Mar 11 10:23:17 crc kubenswrapper[4808]: I0311 10:23:17.562177 4808 generic.go:334] "Generic (PLEG): container finished" podID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerID="3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f" exitCode=0 Mar 11 10:23:17 crc kubenswrapper[4808]: I0311 10:23:17.562327 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" 
event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerDied","Data":"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f"} Mar 11 10:23:19 crc kubenswrapper[4808]: I0311 10:23:19.600278 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerStarted","Data":"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d"} Mar 11 10:23:19 crc kubenswrapper[4808]: I0311 10:23:19.621700 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7fqlp" podStartSLOduration=2.625010128 podStartE2EDuration="5.621675327s" podCreationTimestamp="2026-03-11 10:23:14 +0000 UTC" firstStartedPulling="2026-03-11 10:23:15.540911911 +0000 UTC m=+6246.494235221" lastFinishedPulling="2026-03-11 10:23:18.5375771 +0000 UTC m=+6249.490900420" observedRunningTime="2026-03-11 10:23:19.616116789 +0000 UTC m=+6250.569440119" watchObservedRunningTime="2026-03-11 10:23:19.621675327 +0000 UTC m=+6250.574998667" Mar 11 10:23:24 crc kubenswrapper[4808]: I0311 10:23:24.757878 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:24 crc kubenswrapper[4808]: I0311 10:23:24.757923 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:24 crc kubenswrapper[4808]: I0311 10:23:24.833118 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:25 crc kubenswrapper[4808]: I0311 10:23:25.728335 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:25 crc kubenswrapper[4808]: I0311 10:23:25.781755 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:27 crc kubenswrapper[4808]: I0311 10:23:27.978326 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7fqlp" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="registry-server" containerID="cri-o://6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d" gracePeriod=2 Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.975302 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.987530 4808 generic.go:334] "Generic (PLEG): container finished" podID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerID="6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d" exitCode=0 Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.987572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerDied","Data":"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d"} Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.987606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fqlp" event={"ID":"deb5efc8-24ed-486e-9e6c-819857df1b3c","Type":"ContainerDied","Data":"08072116cab898fdbde04f4c156ade2a6b4135f75474d2997b388030183f2aa7"} Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.987602 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fqlp" Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.987623 4808 scope.go:117] "RemoveContainer" containerID="6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d" Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.995817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities\") pod \"deb5efc8-24ed-486e-9e6c-819857df1b3c\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.996269 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgpfb\" (UniqueName: \"kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb\") pod \"deb5efc8-24ed-486e-9e6c-819857df1b3c\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.996502 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content\") pod \"deb5efc8-24ed-486e-9e6c-819857df1b3c\" (UID: \"deb5efc8-24ed-486e-9e6c-819857df1b3c\") " Mar 11 10:23:28 crc kubenswrapper[4808]: I0311 10:23:28.999897 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities" (OuterVolumeSpecName: "utilities") pod "deb5efc8-24ed-486e-9e6c-819857df1b3c" (UID: "deb5efc8-24ed-486e-9e6c-819857df1b3c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.008160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb" (OuterVolumeSpecName: "kube-api-access-cgpfb") pod "deb5efc8-24ed-486e-9e6c-819857df1b3c" (UID: "deb5efc8-24ed-486e-9e6c-819857df1b3c"). InnerVolumeSpecName "kube-api-access-cgpfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.020284 4808 scope.go:117] "RemoveContainer" containerID="3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.051055 4808 scope.go:117] "RemoveContainer" containerID="9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.085393 4808 scope.go:117] "RemoveContainer" containerID="6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d" Mar 11 10:23:29 crc kubenswrapper[4808]: E0311 10:23:29.085901 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d\": container with ID starting with 6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d not found: ID does not exist" containerID="6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.085967 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d"} err="failed to get container status \"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d\": rpc error: code = NotFound desc = could not find container \"6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d\": container 
with ID starting with 6765ec92e7a513837b85b5bb7f2167df25ada7d46ec282ae57f4fedff4418f0d not found: ID does not exist" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.085997 4808 scope.go:117] "RemoveContainer" containerID="3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f" Mar 11 10:23:29 crc kubenswrapper[4808]: E0311 10:23:29.086428 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f\": container with ID starting with 3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f not found: ID does not exist" containerID="3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.086476 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f"} err="failed to get container status \"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f\": rpc error: code = NotFound desc = could not find container \"3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f\": container with ID starting with 3807ca723d45f44309c57c973c18ef6d4c0c2549a7e5ecc4c967552751d6a14f not found: ID does not exist" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.086498 4808 scope.go:117] "RemoveContainer" containerID="9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8" Mar 11 10:23:29 crc kubenswrapper[4808]: E0311 10:23:29.087010 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8\": container with ID starting with 9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8 not found: ID does not exist" containerID="9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8" 
Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.087065 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8"} err="failed to get container status \"9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8\": rpc error: code = NotFound desc = could not find container \"9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8\": container with ID starting with 9f87b11e4a47cf427bbd2b87a0cd075c9d203e1cb974f5964ce3a7c1636478a8 not found: ID does not exist" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.090017 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deb5efc8-24ed-486e-9e6c-819857df1b3c" (UID: "deb5efc8-24ed-486e-9e6c-819857df1b3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.100311 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.100347 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgpfb\" (UniqueName: \"kubernetes.io/projected/deb5efc8-24ed-486e-9e6c-819857df1b3c-kube-api-access-cgpfb\") on node \"crc\" DevicePath \"\"" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.100371 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb5efc8-24ed-486e-9e6c-819857df1b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.322727 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.328078 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7fqlp"] Mar 11 10:23:29 crc kubenswrapper[4808]: I0311 10:23:29.799020 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" path="/var/lib/kubelet/pods/deb5efc8-24ed-486e-9e6c-819857df1b3c/volumes" Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.059450 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4f84-account-create-update-hkf9p"] Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.065933 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wf94v"] Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.077006 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4f84-account-create-update-hkf9p"] Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.085534 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wf94v"] Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.805754 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0401232b-b657-4e67-b2b4-4bdd71c3f409" path="/var/lib/kubelet/pods/0401232b-b657-4e67-b2b4-4bdd71c3f409/volumes" Mar 11 10:23:31 crc kubenswrapper[4808]: I0311 10:23:31.806484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251ae4bf-4e3d-4665-b893-aa0409a4b0cc" path="/var/lib/kubelet/pods/251ae4bf-4e3d-4665-b893-aa0409a4b0cc/volumes" Mar 11 10:23:42 crc kubenswrapper[4808]: I0311 10:23:42.038237 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r866s"] Mar 11 10:23:42 crc kubenswrapper[4808]: I0311 10:23:42.047261 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r866s"] Mar 11 10:23:43 crc 
kubenswrapper[4808]: I0311 10:23:43.801781 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4593c5-768f-4fc4-a240-58c73fc92664" path="/var/lib/kubelet/pods/4b4593c5-768f-4fc4-a240-58c73fc92664/volumes" Mar 11 10:23:55 crc kubenswrapper[4808]: I0311 10:23:55.030078 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hmcwh"] Mar 11 10:23:55 crc kubenswrapper[4808]: I0311 10:23:55.040582 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hmcwh"] Mar 11 10:23:55 crc kubenswrapper[4808]: I0311 10:23:55.805076 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5774d085-51ad-43a1-9404-6c288deec196" path="/var/lib/kubelet/pods/5774d085-51ad-43a1-9404-6c288deec196/volumes" Mar 11 10:23:57 crc kubenswrapper[4808]: I0311 10:23:57.859131 4808 scope.go:117] "RemoveContainer" containerID="d9b94245f6291c94f554660eaf0535397113461d74f015e17b5f8c0b3c32507d" Mar 11 10:23:57 crc kubenswrapper[4808]: I0311 10:23:57.892975 4808 scope.go:117] "RemoveContainer" containerID="ae3a065f46d337a45bd55bf7959adffc239b664cf6bbcd7497b74b571196fa1d" Mar 11 10:23:57 crc kubenswrapper[4808]: I0311 10:23:57.954670 4808 scope.go:117] "RemoveContainer" containerID="798430011ac65f269e138f564804834356fb80254c4826f6a805f0ae515974d4" Mar 11 10:23:58 crc kubenswrapper[4808]: I0311 10:23:58.020155 4808 scope.go:117] "RemoveContainer" containerID="812848b18e75e588cd448e12ae3f766b6be7e494c1942939a03312177f3d64d6" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.150176 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553744-2wzr4"] Mar 11 10:24:00 crc kubenswrapper[4808]: E0311 10:24:00.150875 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="extract-utilities" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.150891 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="extract-utilities" Mar 11 10:24:00 crc kubenswrapper[4808]: E0311 10:24:00.150906 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="extract-content" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.150913 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="extract-content" Mar 11 10:24:00 crc kubenswrapper[4808]: E0311 10:24:00.150950 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="registry-server" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.150958 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="registry-server" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.151162 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb5efc8-24ed-486e-9e6c-819857df1b3c" containerName="registry-server" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.151784 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.153693 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.154273 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.154393 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.168646 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-2wzr4"] Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.211305 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6gv\" (UniqueName: \"kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv\") pod \"auto-csr-approver-29553744-2wzr4\" (UID: \"a103e496-f239-4c74-ae75-7a7baa88edd5\") " pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.312568 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl6gv\" (UniqueName: \"kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv\") pod \"auto-csr-approver-29553744-2wzr4\" (UID: \"a103e496-f239-4c74-ae75-7a7baa88edd5\") " pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.345879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl6gv\" (UniqueName: \"kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv\") pod \"auto-csr-approver-29553744-2wzr4\" (UID: \"a103e496-f239-4c74-ae75-7a7baa88edd5\") " 
pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.473192 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:00 crc kubenswrapper[4808]: W0311 10:24:00.872623 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda103e496_f239_4c74_ae75_7a7baa88edd5.slice/crio-cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c WatchSource:0}: Error finding container cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c: Status 404 returned error can't find the container with id cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c Mar 11 10:24:00 crc kubenswrapper[4808]: I0311 10:24:00.873628 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-2wzr4"] Mar 11 10:24:01 crc kubenswrapper[4808]: I0311 10:24:01.269760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" event={"ID":"a103e496-f239-4c74-ae75-7a7baa88edd5","Type":"ContainerStarted","Data":"cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c"} Mar 11 10:24:02 crc kubenswrapper[4808]: I0311 10:24:02.278141 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" event={"ID":"a103e496-f239-4c74-ae75-7a7baa88edd5","Type":"ContainerStarted","Data":"834d472ab3da85d30059757c7ab7e8a0051e6fcd749b98ec257ed1da6e382126"} Mar 11 10:24:02 crc kubenswrapper[4808]: I0311 10:24:02.304624 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" podStartSLOduration=1.408740975 podStartE2EDuration="2.304604132s" podCreationTimestamp="2026-03-11 10:24:00 +0000 UTC" firstStartedPulling="2026-03-11 10:24:00.875061204 +0000 UTC 
m=+6291.828384524" lastFinishedPulling="2026-03-11 10:24:01.770924371 +0000 UTC m=+6292.724247681" observedRunningTime="2026-03-11 10:24:02.298429928 +0000 UTC m=+6293.251753258" watchObservedRunningTime="2026-03-11 10:24:02.304604132 +0000 UTC m=+6293.257927452" Mar 11 10:24:03 crc kubenswrapper[4808]: I0311 10:24:03.286630 4808 generic.go:334] "Generic (PLEG): container finished" podID="a103e496-f239-4c74-ae75-7a7baa88edd5" containerID="834d472ab3da85d30059757c7ab7e8a0051e6fcd749b98ec257ed1da6e382126" exitCode=0 Mar 11 10:24:03 crc kubenswrapper[4808]: I0311 10:24:03.286701 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" event={"ID":"a103e496-f239-4c74-ae75-7a7baa88edd5","Type":"ContainerDied","Data":"834d472ab3da85d30059757c7ab7e8a0051e6fcd749b98ec257ed1da6e382126"} Mar 11 10:24:04 crc kubenswrapper[4808]: I0311 10:24:04.582470 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:04 crc kubenswrapper[4808]: I0311 10:24:04.600561 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl6gv\" (UniqueName: \"kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv\") pod \"a103e496-f239-4c74-ae75-7a7baa88edd5\" (UID: \"a103e496-f239-4c74-ae75-7a7baa88edd5\") " Mar 11 10:24:04 crc kubenswrapper[4808]: I0311 10:24:04.607662 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv" (OuterVolumeSpecName: "kube-api-access-vl6gv") pod "a103e496-f239-4c74-ae75-7a7baa88edd5" (UID: "a103e496-f239-4c74-ae75-7a7baa88edd5"). InnerVolumeSpecName "kube-api-access-vl6gv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:24:04 crc kubenswrapper[4808]: I0311 10:24:04.703154 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl6gv\" (UniqueName: \"kubernetes.io/projected/a103e496-f239-4c74-ae75-7a7baa88edd5-kube-api-access-vl6gv\") on node \"crc\" DevicePath \"\"" Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.302527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" event={"ID":"a103e496-f239-4c74-ae75-7a7baa88edd5","Type":"ContainerDied","Data":"cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c"} Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.302770 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd96bc0bf597d541b641d865995c40470d0cd0f8f422839108e3a52f0312569c" Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.302719 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-2wzr4" Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.368888 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-bk6fl"] Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.377218 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-bk6fl"] Mar 11 10:24:05 crc kubenswrapper[4808]: I0311 10:24:05.798748 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fe4790-96d9-4697-b3bf-1ef43aa679ef" path="/var/lib/kubelet/pods/48fe4790-96d9-4697-b3bf-1ef43aa679ef/volumes" Mar 11 10:24:46 crc kubenswrapper[4808]: I0311 10:24:46.027942 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 11 10:24:46 crc kubenswrapper[4808]: I0311 10:24:46.028572 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:24:58 crc kubenswrapper[4808]: I0311 10:24:58.156534 4808 scope.go:117] "RemoveContainer" containerID="1191df428e66bead61cff11d4da4c0136b53e812dd50bd6843afa40fde54d414" Mar 11 10:25:16 crc kubenswrapper[4808]: I0311 10:25:16.027036 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:25:16 crc kubenswrapper[4808]: I0311 10:25:16.027483 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.027969 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.028520 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.028564 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.029040 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.029090 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" gracePeriod=600 Mar 11 10:25:46 crc kubenswrapper[4808]: E0311 10:25:46.150159 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.177721 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" exitCode=0 Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.177771 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f"} Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.177808 4808 scope.go:117] "RemoveContainer" containerID="713b44db5f04305137581df2a17a4e783258fa66a6b487f3c22633da024fb6c9" Mar 11 10:25:46 crc kubenswrapper[4808]: I0311 10:25:46.178431 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:25:46 crc kubenswrapper[4808]: E0311 10:25:46.178744 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:25:57 crc kubenswrapper[4808]: I0311 10:25:57.791489 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:25:57 crc kubenswrapper[4808]: E0311 10:25:57.792351 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.159773 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553746-bdxjr"] Mar 11 10:26:00 crc kubenswrapper[4808]: E0311 10:26:00.160493 
4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a103e496-f239-4c74-ae75-7a7baa88edd5" containerName="oc" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.160510 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a103e496-f239-4c74-ae75-7a7baa88edd5" containerName="oc" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.160712 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a103e496-f239-4c74-ae75-7a7baa88edd5" containerName="oc" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.161345 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.166345 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.166549 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.166621 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.174301 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-bdxjr"] Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.275727 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5g2\" (UniqueName: \"kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2\") pod \"auto-csr-approver-29553746-bdxjr\" (UID: \"05b293a9-6e2e-48d9-b4c0-e0f475a6288d\") " pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.376885 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9g5g2\" (UniqueName: \"kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2\") pod \"auto-csr-approver-29553746-bdxjr\" (UID: \"05b293a9-6e2e-48d9-b4c0-e0f475a6288d\") " pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.407276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5g2\" (UniqueName: \"kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2\") pod \"auto-csr-approver-29553746-bdxjr\" (UID: \"05b293a9-6e2e-48d9-b4c0-e0f475a6288d\") " pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.478952 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:00 crc kubenswrapper[4808]: I0311 10:26:00.934886 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-bdxjr"] Mar 11 10:26:01 crc kubenswrapper[4808]: I0311 10:26:01.335061 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" event={"ID":"05b293a9-6e2e-48d9-b4c0-e0f475a6288d","Type":"ContainerStarted","Data":"857e25f3acd99f461687a6b84a1c5519e59864425c9887714a3fae772ea13f65"} Mar 11 10:26:02 crc kubenswrapper[4808]: I0311 10:26:02.342713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" event={"ID":"05b293a9-6e2e-48d9-b4c0-e0f475a6288d","Type":"ContainerStarted","Data":"563a1c239a4163caa081ca40ef059a8188572e0e777003522e6b13baa46a93c9"} Mar 11 10:26:02 crc kubenswrapper[4808]: I0311 10:26:02.361709 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" podStartSLOduration=1.2984581880000001 podStartE2EDuration="2.361680925s" podCreationTimestamp="2026-03-11 
10:26:00 +0000 UTC" firstStartedPulling="2026-03-11 10:26:00.940824852 +0000 UTC m=+6411.894148172" lastFinishedPulling="2026-03-11 10:26:02.004047589 +0000 UTC m=+6412.957370909" observedRunningTime="2026-03-11 10:26:02.357768315 +0000 UTC m=+6413.311091645" watchObservedRunningTime="2026-03-11 10:26:02.361680925 +0000 UTC m=+6413.315004285" Mar 11 10:26:03 crc kubenswrapper[4808]: I0311 10:26:03.354375 4808 generic.go:334] "Generic (PLEG): container finished" podID="05b293a9-6e2e-48d9-b4c0-e0f475a6288d" containerID="563a1c239a4163caa081ca40ef059a8188572e0e777003522e6b13baa46a93c9" exitCode=0 Mar 11 10:26:03 crc kubenswrapper[4808]: I0311 10:26:03.354698 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" event={"ID":"05b293a9-6e2e-48d9-b4c0-e0f475a6288d","Type":"ContainerDied","Data":"563a1c239a4163caa081ca40ef059a8188572e0e777003522e6b13baa46a93c9"} Mar 11 10:26:04 crc kubenswrapper[4808]: I0311 10:26:04.833042 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:04 crc kubenswrapper[4808]: I0311 10:26:04.956468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g5g2\" (UniqueName: \"kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2\") pod \"05b293a9-6e2e-48d9-b4c0-e0f475a6288d\" (UID: \"05b293a9-6e2e-48d9-b4c0-e0f475a6288d\") " Mar 11 10:26:04 crc kubenswrapper[4808]: I0311 10:26:04.961200 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2" (OuterVolumeSpecName: "kube-api-access-9g5g2") pod "05b293a9-6e2e-48d9-b4c0-e0f475a6288d" (UID: "05b293a9-6e2e-48d9-b4c0-e0f475a6288d"). InnerVolumeSpecName "kube-api-access-9g5g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.059125 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g5g2\" (UniqueName: \"kubernetes.io/projected/05b293a9-6e2e-48d9-b4c0-e0f475a6288d-kube-api-access-9g5g2\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.372183 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" event={"ID":"05b293a9-6e2e-48d9-b4c0-e0f475a6288d","Type":"ContainerDied","Data":"857e25f3acd99f461687a6b84a1c5519e59864425c9887714a3fae772ea13f65"} Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.372230 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857e25f3acd99f461687a6b84a1c5519e59864425c9887714a3fae772ea13f65" Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.372497 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-bdxjr" Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.433159 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-pqw28"] Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.441771 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-pqw28"] Mar 11 10:26:05 crc kubenswrapper[4808]: I0311 10:26:05.799061 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff122be-684e-4fff-bdd4-925636ea0afd" path="/var/lib/kubelet/pods/4ff122be-684e-4fff-bdd4-925636ea0afd/volumes" Mar 11 10:26:12 crc kubenswrapper[4808]: I0311 10:26:12.788952 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:26:12 crc kubenswrapper[4808]: E0311 10:26:12.790876 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:26:23 crc kubenswrapper[4808]: I0311 10:26:23.789853 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:26:23 crc kubenswrapper[4808]: E0311 10:26:23.791188 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:26:35 crc kubenswrapper[4808]: I0311 10:26:35.789923 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:26:35 crc kubenswrapper[4808]: E0311 10:26:35.790988 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:26:47 crc kubenswrapper[4808]: I0311 10:26:47.790134 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:26:47 crc kubenswrapper[4808]: E0311 10:26:47.790909 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:26:58 crc kubenswrapper[4808]: I0311 10:26:58.266586 4808 scope.go:117] "RemoveContainer" containerID="60f7d1327d649e2b103119c43fc97c8bdb112e3446797aa4a454ecde4d3028a6" Mar 11 10:27:02 crc kubenswrapper[4808]: I0311 10:27:02.789413 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:27:02 crc kubenswrapper[4808]: E0311 10:27:02.791549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:27:14 crc kubenswrapper[4808]: I0311 10:27:14.789609 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:27:14 crc kubenswrapper[4808]: E0311 10:27:14.790470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:27:25 crc kubenswrapper[4808]: I0311 10:27:25.789629 4808 scope.go:117] 
"RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:27:25 crc kubenswrapper[4808]: E0311 10:27:25.792084 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:27:38 crc kubenswrapper[4808]: I0311 10:27:38.789667 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:27:38 crc kubenswrapper[4808]: E0311 10:27:38.790778 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:27:49 crc kubenswrapper[4808]: I0311 10:27:49.802270 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:27:49 crc kubenswrapper[4808]: E0311 10:27:49.803128 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.147019 
4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:27:50 crc kubenswrapper[4808]: E0311 10:27:50.147434 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b293a9-6e2e-48d9-b4c0-e0f475a6288d" containerName="oc" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.147458 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b293a9-6e2e-48d9-b4c0-e0f475a6288d" containerName="oc" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.147659 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b293a9-6e2e-48d9-b4c0-e0f475a6288d" containerName="oc" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.149471 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.162814 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.324668 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dlx4\" (UniqueName: \"kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.324791 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.324834 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.426073 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.426135 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.426198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dlx4\" (UniqueName: \"kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.427004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.427840 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.454579 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dlx4\" (UniqueName: \"kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4\") pod \"redhat-marketplace-hlckr\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.477191 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:27:50 crc kubenswrapper[4808]: I0311 10:27:50.923713 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:27:51 crc kubenswrapper[4808]: I0311 10:27:51.319450 4808 generic.go:334] "Generic (PLEG): container finished" podID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerID="b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36" exitCode=0 Mar 11 10:27:51 crc kubenswrapper[4808]: I0311 10:27:51.319503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerDied","Data":"b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36"} Mar 11 10:27:51 crc kubenswrapper[4808]: I0311 10:27:51.319813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerStarted","Data":"d9085e054052dbbeb5676c4863b8917a2c4018081dcaa282de246ca35fb40bcf"} Mar 11 10:27:51 crc kubenswrapper[4808]: I0311 10:27:51.321630 4808 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:27:52 crc kubenswrapper[4808]: I0311 10:27:52.334337 4808 generic.go:334] "Generic (PLEG): container finished" podID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerID="07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6" exitCode=0 Mar 11 10:27:52 crc kubenswrapper[4808]: I0311 10:27:52.334448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerDied","Data":"07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6"} Mar 11 10:27:53 crc kubenswrapper[4808]: I0311 10:27:53.343058 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerStarted","Data":"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846"} Mar 11 10:27:53 crc kubenswrapper[4808]: I0311 10:27:53.363087 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hlckr" podStartSLOduration=1.938165699 podStartE2EDuration="3.363066217s" podCreationTimestamp="2026-03-11 10:27:50 +0000 UTC" firstStartedPulling="2026-03-11 10:27:51.321439765 +0000 UTC m=+6522.274763085" lastFinishedPulling="2026-03-11 10:27:52.746340273 +0000 UTC m=+6523.699663603" observedRunningTime="2026-03-11 10:27:53.360264328 +0000 UTC m=+6524.313587648" watchObservedRunningTime="2026-03-11 10:27:53.363066217 +0000 UTC m=+6524.316389547" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.160828 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553748-zbbpq"] Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.162990 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.166429 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.166468 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.166513 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.169348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-zbbpq"] Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.312804 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42zjc\" (UniqueName: \"kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc\") pod \"auto-csr-approver-29553748-zbbpq\" (UID: \"fa846b72-49f5-425c-8b18-f5d4cdb63558\") " pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.414514 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42zjc\" (UniqueName: \"kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc\") pod \"auto-csr-approver-29553748-zbbpq\" (UID: \"fa846b72-49f5-425c-8b18-f5d4cdb63558\") " pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.439547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42zjc\" (UniqueName: \"kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc\") pod \"auto-csr-approver-29553748-zbbpq\" (UID: \"fa846b72-49f5-425c-8b18-f5d4cdb63558\") " 
pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.477797 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.478141 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.492616 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.569583 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.789340 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:28:00 crc kubenswrapper[4808]: E0311 10:28:00.789787 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:28:00 crc kubenswrapper[4808]: I0311 10:28:00.967278 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-zbbpq"] Mar 11 10:28:01 crc kubenswrapper[4808]: I0311 10:28:01.426135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" event={"ID":"fa846b72-49f5-425c-8b18-f5d4cdb63558","Type":"ContainerStarted","Data":"626c26470a008578657c7b69e57531440b671116b20b8fb3d2db55f1c129eb9f"} Mar 11 
10:28:01 crc kubenswrapper[4808]: I0311 10:28:01.471927 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:01 crc kubenswrapper[4808]: I0311 10:28:01.518971 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:28:02 crc kubenswrapper[4808]: I0311 10:28:02.436004 4808 generic.go:334] "Generic (PLEG): container finished" podID="fa846b72-49f5-425c-8b18-f5d4cdb63558" containerID="f6cb416eecf3995cee4e47b0724af81d5f8dfe90e32904f3b1bde907572e8246" exitCode=0 Mar 11 10:28:02 crc kubenswrapper[4808]: I0311 10:28:02.436080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" event={"ID":"fa846b72-49f5-425c-8b18-f5d4cdb63558","Type":"ContainerDied","Data":"f6cb416eecf3995cee4e47b0724af81d5f8dfe90e32904f3b1bde907572e8246"} Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.442850 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hlckr" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="registry-server" containerID="cri-o://d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846" gracePeriod=2 Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.771767 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.864898 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.887217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42zjc\" (UniqueName: \"kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc\") pod \"fa846b72-49f5-425c-8b18-f5d4cdb63558\" (UID: \"fa846b72-49f5-425c-8b18-f5d4cdb63558\") " Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.896791 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc" (OuterVolumeSpecName: "kube-api-access-42zjc") pod "fa846b72-49f5-425c-8b18-f5d4cdb63558" (UID: "fa846b72-49f5-425c-8b18-f5d4cdb63558"). InnerVolumeSpecName "kube-api-access-42zjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.991896 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dlx4\" (UniqueName: \"kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4\") pod \"4416c6cf-1746-4a55-bef5-9df874731b4d\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.992079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities\") pod \"4416c6cf-1746-4a55-bef5-9df874731b4d\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.992117 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content\") pod \"4416c6cf-1746-4a55-bef5-9df874731b4d\" (UID: \"4416c6cf-1746-4a55-bef5-9df874731b4d\") " Mar 11 10:28:03 crc 
kubenswrapper[4808]: I0311 10:28:03.992473 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42zjc\" (UniqueName: \"kubernetes.io/projected/fa846b72-49f5-425c-8b18-f5d4cdb63558-kube-api-access-42zjc\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.993078 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities" (OuterVolumeSpecName: "utilities") pod "4416c6cf-1746-4a55-bef5-9df874731b4d" (UID: "4416c6cf-1746-4a55-bef5-9df874731b4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:28:03 crc kubenswrapper[4808]: I0311 10:28:03.995339 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4" (OuterVolumeSpecName: "kube-api-access-5dlx4") pod "4416c6cf-1746-4a55-bef5-9df874731b4d" (UID: "4416c6cf-1746-4a55-bef5-9df874731b4d"). InnerVolumeSpecName "kube-api-access-5dlx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.016749 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4416c6cf-1746-4a55-bef5-9df874731b4d" (UID: "4416c6cf-1746-4a55-bef5-9df874731b4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.093939 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dlx4\" (UniqueName: \"kubernetes.io/projected/4416c6cf-1746-4a55-bef5-9df874731b4d-kube-api-access-5dlx4\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.093976 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.093988 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4416c6cf-1746-4a55-bef5-9df874731b4d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.455279 4808 generic.go:334] "Generic (PLEG): container finished" podID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerID="d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846" exitCode=0 Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.455401 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerDied","Data":"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846"} Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.455436 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlckr" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.455485 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlckr" event={"ID":"4416c6cf-1746-4a55-bef5-9df874731b4d","Type":"ContainerDied","Data":"d9085e054052dbbeb5676c4863b8917a2c4018081dcaa282de246ca35fb40bcf"} Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.455516 4808 scope.go:117] "RemoveContainer" containerID="d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.459810 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" event={"ID":"fa846b72-49f5-425c-8b18-f5d4cdb63558","Type":"ContainerDied","Data":"626c26470a008578657c7b69e57531440b671116b20b8fb3d2db55f1c129eb9f"} Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.460150 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626c26470a008578657c7b69e57531440b671116b20b8fb3d2db55f1c129eb9f" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.460117 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-zbbpq" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.484772 4808 scope.go:117] "RemoveContainer" containerID="07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.505784 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.512390 4808 scope.go:117] "RemoveContainer" containerID="b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.518427 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlckr"] Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.526862 4808 scope.go:117] "RemoveContainer" containerID="d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846" Mar 11 10:28:04 crc kubenswrapper[4808]: E0311 10:28:04.527392 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846\": container with ID starting with d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846 not found: ID does not exist" containerID="d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.527497 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846"} err="failed to get container status \"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846\": rpc error: code = NotFound desc = could not find container \"d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846\": container with ID starting with d04cd171f15aeff1d8d9673978c30a1180b6bfc6c2c6686b4fc5ccd2e3ff9846 not found: 
ID does not exist" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.527574 4808 scope.go:117] "RemoveContainer" containerID="07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6" Mar 11 10:28:04 crc kubenswrapper[4808]: E0311 10:28:04.527942 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6\": container with ID starting with 07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6 not found: ID does not exist" containerID="07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.527989 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6"} err="failed to get container status \"07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6\": rpc error: code = NotFound desc = could not find container \"07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6\": container with ID starting with 07c4cca95a909ddeeb42cc382da82f23f6ee50b9459dc6efb6896f3eabb0afa6 not found: ID does not exist" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.528019 4808 scope.go:117] "RemoveContainer" containerID="b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36" Mar 11 10:28:04 crc kubenswrapper[4808]: E0311 10:28:04.528329 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36\": container with ID starting with b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36 not found: ID does not exist" containerID="b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.528355 4808 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36"} err="failed to get container status \"b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36\": rpc error: code = NotFound desc = could not find container \"b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36\": container with ID starting with b9f5273cdd410a8e3af4b826df14a50ee5dfdef7c9789f1c3f4f35ee40d44e36 not found: ID does not exist" Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.846529 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-tvhjb"] Mar 11 10:28:04 crc kubenswrapper[4808]: I0311 10:28:04.851903 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-tvhjb"] Mar 11 10:28:05 crc kubenswrapper[4808]: I0311 10:28:05.807652 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" path="/var/lib/kubelet/pods/4416c6cf-1746-4a55-bef5-9df874731b4d/volumes" Mar 11 10:28:05 crc kubenswrapper[4808]: I0311 10:28:05.809049 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a690cc-d7b2-4d2b-a810-d792daf721d1" path="/var/lib/kubelet/pods/e3a690cc-d7b2-4d2b-a810-d792daf721d1/volumes" Mar 11 10:28:15 crc kubenswrapper[4808]: I0311 10:28:15.789615 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:28:15 crc kubenswrapper[4808]: E0311 10:28:15.790556 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:28:26 crc kubenswrapper[4808]: I0311 10:28:26.789870 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:28:26 crc kubenswrapper[4808]: E0311 10:28:26.791323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:28:37 crc kubenswrapper[4808]: I0311 10:28:37.791419 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:28:37 crc kubenswrapper[4808]: E0311 10:28:37.792429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:28:48 crc kubenswrapper[4808]: I0311 10:28:48.789394 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:28:48 crc kubenswrapper[4808]: E0311 10:28:48.790296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:28:58 crc kubenswrapper[4808]: I0311 10:28:58.375855 4808 scope.go:117] "RemoveContainer" containerID="cbf217815e76feb57f930177b64ced4c7e4e827d7f10a1eed0b73bf6384c6929" Mar 11 10:29:01 crc kubenswrapper[4808]: I0311 10:29:01.790035 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:29:01 crc kubenswrapper[4808]: E0311 10:29:01.790696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:29:14 crc kubenswrapper[4808]: I0311 10:29:14.790299 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:29:14 crc kubenswrapper[4808]: E0311 10:29:14.791439 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:29:26 crc kubenswrapper[4808]: I0311 10:29:26.789721 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:29:26 crc kubenswrapper[4808]: E0311 10:29:26.792486 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.339721 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:37 crc kubenswrapper[4808]: E0311 10:29:37.341405 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="extract-utilities" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341435 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="extract-utilities" Mar 11 10:29:37 crc kubenswrapper[4808]: E0311 10:29:37.341463 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="extract-content" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341477 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="extract-content" Mar 11 10:29:37 crc kubenswrapper[4808]: E0311 10:29:37.341518 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa846b72-49f5-425c-8b18-f5d4cdb63558" containerName="oc" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341534 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa846b72-49f5-425c-8b18-f5d4cdb63558" containerName="oc" Mar 11 10:29:37 crc kubenswrapper[4808]: E0311 10:29:37.341581 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="registry-server" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341594 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="registry-server" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341907 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa846b72-49f5-425c-8b18-f5d4cdb63558" containerName="oc" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.341959 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4416c6cf-1746-4a55-bef5-9df874731b4d" containerName="registry-server" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.344083 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.398797 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.450456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.450506 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.450610 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zm4\" (UniqueName: \"kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4\") pod \"community-operators-kszct\" (UID: 
\"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.552162 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zm4\" (UniqueName: \"kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.552249 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.552280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.552853 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.553019 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") 
" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.576976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zm4\" (UniqueName: \"kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4\") pod \"community-operators-kszct\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:37 crc kubenswrapper[4808]: I0311 10:29:37.714613 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:38 crc kubenswrapper[4808]: I0311 10:29:38.212755 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:38 crc kubenswrapper[4808]: W0311 10:29:38.214558 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91a42830_7b13_4552_8123_e4c94a567cc7.slice/crio-b2341b8cc2488247c83185d352c5fbefa5d06b493f4346597c848e264a6cf447 WatchSource:0}: Error finding container b2341b8cc2488247c83185d352c5fbefa5d06b493f4346597c848e264a6cf447: Status 404 returned error can't find the container with id b2341b8cc2488247c83185d352c5fbefa5d06b493f4346597c848e264a6cf447 Mar 11 10:29:38 crc kubenswrapper[4808]: I0311 10:29:38.270885 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerStarted","Data":"b2341b8cc2488247c83185d352c5fbefa5d06b493f4346597c848e264a6cf447"} Mar 11 10:29:38 crc kubenswrapper[4808]: I0311 10:29:38.789524 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:29:38 crc kubenswrapper[4808]: E0311 10:29:38.789880 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:29:39 crc kubenswrapper[4808]: I0311 10:29:39.285105 4808 generic.go:334] "Generic (PLEG): container finished" podID="91a42830-7b13-4552-8123-e4c94a567cc7" containerID="2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92" exitCode=0 Mar 11 10:29:39 crc kubenswrapper[4808]: I0311 10:29:39.285175 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerDied","Data":"2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92"} Mar 11 10:29:40 crc kubenswrapper[4808]: I0311 10:29:40.300703 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerStarted","Data":"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0"} Mar 11 10:29:41 crc kubenswrapper[4808]: I0311 10:29:41.313783 4808 generic.go:334] "Generic (PLEG): container finished" podID="91a42830-7b13-4552-8123-e4c94a567cc7" containerID="1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0" exitCode=0 Mar 11 10:29:41 crc kubenswrapper[4808]: I0311 10:29:41.313984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerDied","Data":"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0"} Mar 11 10:29:42 crc kubenswrapper[4808]: I0311 10:29:42.324201 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerStarted","Data":"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611"} Mar 11 10:29:42 crc kubenswrapper[4808]: I0311 10:29:42.345715 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kszct" podStartSLOduration=2.498119909 podStartE2EDuration="5.34569399s" podCreationTimestamp="2026-03-11 10:29:37 +0000 UTC" firstStartedPulling="2026-03-11 10:29:39.287862529 +0000 UTC m=+6630.241185839" lastFinishedPulling="2026-03-11 10:29:42.1354366 +0000 UTC m=+6633.088759920" observedRunningTime="2026-03-11 10:29:42.343180479 +0000 UTC m=+6633.296503799" watchObservedRunningTime="2026-03-11 10:29:42.34569399 +0000 UTC m=+6633.299017310" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.696748 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.699908 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.712338 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.818068 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsshw\" (UniqueName: \"kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.818162 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.818200 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.919724 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.919808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.920328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.920929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.921780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsshw\" (UniqueName: \"kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:44 crc kubenswrapper[4808]: I0311 10:29:44.946144 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsshw\" (UniqueName: \"kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw\") pod \"redhat-operators-l9g6w\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:45 crc kubenswrapper[4808]: I0311 10:29:45.032107 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:45 crc kubenswrapper[4808]: I0311 10:29:45.504862 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:29:46 crc kubenswrapper[4808]: I0311 10:29:46.354037 4808 generic.go:334] "Generic (PLEG): container finished" podID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerID="afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87" exitCode=0 Mar 11 10:29:46 crc kubenswrapper[4808]: I0311 10:29:46.354128 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerDied","Data":"afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87"} Mar 11 10:29:46 crc kubenswrapper[4808]: I0311 10:29:46.354330 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerStarted","Data":"5f42e6cd6053be11b915314602d4cd7291b61330212f638a279897c58f128266"} Mar 11 10:29:47 crc kubenswrapper[4808]: I0311 10:29:47.366625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerStarted","Data":"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5"} Mar 11 10:29:47 crc kubenswrapper[4808]: I0311 10:29:47.714984 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:47 crc kubenswrapper[4808]: I0311 10:29:47.715031 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:47 crc kubenswrapper[4808]: I0311 10:29:47.770539 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:48 crc kubenswrapper[4808]: I0311 10:29:48.387883 4808 generic.go:334] "Generic (PLEG): container finished" podID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerID="fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5" exitCode=0 Mar 11 10:29:48 crc kubenswrapper[4808]: I0311 10:29:48.390052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerDied","Data":"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5"} Mar 11 10:29:48 crc kubenswrapper[4808]: I0311 10:29:48.466886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:49 crc kubenswrapper[4808]: I0311 10:29:49.399675 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerStarted","Data":"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0"} Mar 11 10:29:49 crc kubenswrapper[4808]: I0311 10:29:49.422588 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9g6w" podStartSLOduration=2.98411972 podStartE2EDuration="5.422570494s" podCreationTimestamp="2026-03-11 10:29:44 +0000 UTC" firstStartedPulling="2026-03-11 10:29:46.356302324 +0000 UTC m=+6637.309625654" lastFinishedPulling="2026-03-11 10:29:48.794753108 +0000 UTC m=+6639.748076428" observedRunningTime="2026-03-11 10:29:49.418775957 +0000 UTC m=+6640.372099277" watchObservedRunningTime="2026-03-11 10:29:49.422570494 +0000 UTC m=+6640.375893804" Mar 11 10:29:50 crc kubenswrapper[4808]: I0311 10:29:50.088815 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:50 crc kubenswrapper[4808]: I0311 
10:29:50.406761 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kszct" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="registry-server" containerID="cri-o://aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611" gracePeriod=2 Mar 11 10:29:50 crc kubenswrapper[4808]: I0311 10:29:50.789922 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:29:50 crc kubenswrapper[4808]: E0311 10:29:50.790389 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:29:50 crc kubenswrapper[4808]: I0311 10:29:50.906146 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.027525 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content\") pod \"91a42830-7b13-4552-8123-e4c94a567cc7\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.027758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities\") pod \"91a42830-7b13-4552-8123-e4c94a567cc7\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.027804 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54zm4\" (UniqueName: \"kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4\") pod \"91a42830-7b13-4552-8123-e4c94a567cc7\" (UID: \"91a42830-7b13-4552-8123-e4c94a567cc7\") " Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.028608 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities" (OuterVolumeSpecName: "utilities") pod "91a42830-7b13-4552-8123-e4c94a567cc7" (UID: "91a42830-7b13-4552-8123-e4c94a567cc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.035906 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4" (OuterVolumeSpecName: "kube-api-access-54zm4") pod "91a42830-7b13-4552-8123-e4c94a567cc7" (UID: "91a42830-7b13-4552-8123-e4c94a567cc7"). InnerVolumeSpecName "kube-api-access-54zm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.083882 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91a42830-7b13-4552-8123-e4c94a567cc7" (UID: "91a42830-7b13-4552-8123-e4c94a567cc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.129692 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.129729 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91a42830-7b13-4552-8123-e4c94a567cc7-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.129738 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54zm4\" (UniqueName: \"kubernetes.io/projected/91a42830-7b13-4552-8123-e4c94a567cc7-kube-api-access-54zm4\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.418607 4808 generic.go:334] "Generic (PLEG): container finished" podID="91a42830-7b13-4552-8123-e4c94a567cc7" containerID="aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611" exitCode=0 Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.418661 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerDied","Data":"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611"} Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.418736 4808 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-kszct" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.418772 4808 scope.go:117] "RemoveContainer" containerID="aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.418753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kszct" event={"ID":"91a42830-7b13-4552-8123-e4c94a567cc7","Type":"ContainerDied","Data":"b2341b8cc2488247c83185d352c5fbefa5d06b493f4346597c848e264a6cf447"} Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.435340 4808 scope.go:117] "RemoveContainer" containerID="1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.456869 4808 scope.go:117] "RemoveContainer" containerID="2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.501234 4808 scope.go:117] "RemoveContainer" containerID="aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.503612 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:51 crc kubenswrapper[4808]: E0311 10:29:51.503676 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611\": container with ID starting with aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611 not found: ID does not exist" containerID="aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.503769 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611"} 
err="failed to get container status \"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611\": rpc error: code = NotFound desc = could not find container \"aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611\": container with ID starting with aac618a446f1b452155041d25bfc218560f439aab02344ea30b65c154e3e7611 not found: ID does not exist" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.503802 4808 scope.go:117] "RemoveContainer" containerID="1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0" Mar 11 10:29:51 crc kubenswrapper[4808]: E0311 10:29:51.504232 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0\": container with ID starting with 1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0 not found: ID does not exist" containerID="1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.504274 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0"} err="failed to get container status \"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0\": rpc error: code = NotFound desc = could not find container \"1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0\": container with ID starting with 1772e8fa32a0a0f871d19f785c1ce4ba7e5d846bf2b00322bca5156a42f81ff0 not found: ID does not exist" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.504305 4808 scope.go:117] "RemoveContainer" containerID="2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92" Mar 11 10:29:51 crc kubenswrapper[4808]: E0311 10:29:51.504623 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92\": container with ID starting with 2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92 not found: ID does not exist" containerID="2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.504651 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92"} err="failed to get container status \"2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92\": rpc error: code = NotFound desc = could not find container \"2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92\": container with ID starting with 2235da6e74690238eff51f981cea72411898b511ffc85f8ac4eee90552bead92 not found: ID does not exist" Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.511866 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kszct"] Mar 11 10:29:51 crc kubenswrapper[4808]: I0311 10:29:51.822734 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" path="/var/lib/kubelet/pods/91a42830-7b13-4552-8123-e4c94a567cc7/volumes" Mar 11 10:29:55 crc kubenswrapper[4808]: I0311 10:29:55.033036 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:55 crc kubenswrapper[4808]: I0311 10:29:55.034326 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:29:56 crc kubenswrapper[4808]: I0311 10:29:56.086208 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9g6w" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="registry-server" probeResult="failure" output=< Mar 11 10:29:56 crc kubenswrapper[4808]: timeout: failed 
to connect service ":50051" within 1s Mar 11 10:29:56 crc kubenswrapper[4808]: > Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.154929 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553750-bmnlv"] Mar 11 10:30:00 crc kubenswrapper[4808]: E0311 10:30:00.155872 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="extract-content" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.155894 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="extract-content" Mar 11 10:30:00 crc kubenswrapper[4808]: E0311 10:30:00.155913 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="registry-server" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.155923 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="registry-server" Mar 11 10:30:00 crc kubenswrapper[4808]: E0311 10:30:00.155966 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="extract-utilities" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.155977 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="extract-utilities" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.156239 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a42830-7b13-4552-8123-e4c94a567cc7" containerName="registry-server" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.157125 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.161311 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.162169 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.162456 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.165711 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7"] Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.167118 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.169068 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.169146 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.175269 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-bmnlv"] Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.195593 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7"] Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.295145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.295623 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hsm\" (UniqueName: \"kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm\") pod \"auto-csr-approver-29553750-bmnlv\" (UID: \"256f4e2a-fcff-437f-a29e-d344bf24b519\") " pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.295863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5tl\" (UniqueName: \"kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.296108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.397515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 
10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.397597 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.397643 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hsm\" (UniqueName: \"kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm\") pod \"auto-csr-approver-29553750-bmnlv\" (UID: \"256f4e2a-fcff-437f-a29e-d344bf24b519\") " pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.397673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5tl\" (UniqueName: \"kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.400251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.403282 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume\") pod \"collect-profiles-29553750-jqtg7\" (UID: 
\"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.418920 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5tl\" (UniqueName: \"kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl\") pod \"collect-profiles-29553750-jqtg7\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.420304 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hsm\" (UniqueName: \"kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm\") pod \"auto-csr-approver-29553750-bmnlv\" (UID: \"256f4e2a-fcff-437f-a29e-d344bf24b519\") " pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.480417 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.490137 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.914326 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-bmnlv"] Mar 11 10:30:00 crc kubenswrapper[4808]: I0311 10:30:00.924139 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7"] Mar 11 10:30:00 crc kubenswrapper[4808]: W0311 10:30:00.926546 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256f4e2a_fcff_437f_a29e_d344bf24b519.slice/crio-a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d WatchSource:0}: Error finding container a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d: Status 404 returned error can't find the container with id a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d Mar 11 10:30:00 crc kubenswrapper[4808]: W0311 10:30:00.927506 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7de11e_3179_4aa0_8cc8_d6c3056d9b03.slice/crio-661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba WatchSource:0}: Error finding container 661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba: Status 404 returned error can't find the container with id 661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba Mar 11 10:30:01 crc kubenswrapper[4808]: I0311 10:30:01.510609 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" event={"ID":"256f4e2a-fcff-437f-a29e-d344bf24b519","Type":"ContainerStarted","Data":"a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d"} Mar 11 10:30:01 crc kubenswrapper[4808]: I0311 10:30:01.513891 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" containerID="281edb92847a69022501c79133fa5de33fb9d28442feeb3966fbf0c077e211c5" exitCode=0 Mar 11 10:30:01 crc kubenswrapper[4808]: I0311 10:30:01.513946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" event={"ID":"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03","Type":"ContainerDied","Data":"281edb92847a69022501c79133fa5de33fb9d28442feeb3966fbf0c077e211c5"} Mar 11 10:30:01 crc kubenswrapper[4808]: I0311 10:30:01.513974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" event={"ID":"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03","Type":"ContainerStarted","Data":"661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba"} Mar 11 10:30:02 crc kubenswrapper[4808]: I0311 10:30:02.899858 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.045116 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume\") pod \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.045274 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume\") pod \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.045430 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5tl\" (UniqueName: 
\"kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl\") pod \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\" (UID: \"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03\") " Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.045940 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" (UID: "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.050533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" (UID: "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.050616 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl" (OuterVolumeSpecName: "kube-api-access-sp5tl") pod "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" (UID: "ed7de11e-3179-4aa0-8cc8-d6c3056d9b03"). InnerVolumeSpecName "kube-api-access-sp5tl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.147731 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5tl\" (UniqueName: \"kubernetes.io/projected/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-kube-api-access-sp5tl\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.147798 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.147819 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7de11e-3179-4aa0-8cc8-d6c3056d9b03-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.533895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" event={"ID":"ed7de11e-3179-4aa0-8cc8-d6c3056d9b03","Type":"ContainerDied","Data":"661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba"} Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.533947 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="661d71fc0b1e158c511aac18629778dd8fbd2ca8e6e5161a5b7c96578fb35dba" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.534006 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-jqtg7" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.789811 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:30:03 crc kubenswrapper[4808]: E0311 10:30:03.790478 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.978224 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx"] Mar 11 10:30:03 crc kubenswrapper[4808]: I0311 10:30:03.986236 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-525jx"] Mar 11 10:30:04 crc kubenswrapper[4808]: I0311 10:30:04.545502 4808 generic.go:334] "Generic (PLEG): container finished" podID="256f4e2a-fcff-437f-a29e-d344bf24b519" containerID="9e941891ffc546299c60383ca6b55548551c4d5af7e05ebc9f6da388faf8cbe0" exitCode=0 Mar 11 10:30:04 crc kubenswrapper[4808]: I0311 10:30:04.545552 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" event={"ID":"256f4e2a-fcff-437f-a29e-d344bf24b519","Type":"ContainerDied","Data":"9e941891ffc546299c60383ca6b55548551c4d5af7e05ebc9f6da388faf8cbe0"} Mar 11 10:30:05 crc kubenswrapper[4808]: I0311 10:30:05.106102 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:30:05 crc kubenswrapper[4808]: I0311 10:30:05.152229 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:30:05 crc kubenswrapper[4808]: I0311 10:30:05.341815 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:30:05 crc kubenswrapper[4808]: I0311 10:30:05.797340 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6baca687-187c-4054-8060-85672564432f" path="/var/lib/kubelet/pods/6baca687-187c-4054-8060-85672564432f/volumes" Mar 11 10:30:05 crc kubenswrapper[4808]: I0311 10:30:05.882218 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.002826 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hsm\" (UniqueName: \"kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm\") pod \"256f4e2a-fcff-437f-a29e-d344bf24b519\" (UID: \"256f4e2a-fcff-437f-a29e-d344bf24b519\") " Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.008190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm" (OuterVolumeSpecName: "kube-api-access-s4hsm") pod "256f4e2a-fcff-437f-a29e-d344bf24b519" (UID: "256f4e2a-fcff-437f-a29e-d344bf24b519"). InnerVolumeSpecName "kube-api-access-s4hsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.104607 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hsm\" (UniqueName: \"kubernetes.io/projected/256f4e2a-fcff-437f-a29e-d344bf24b519-kube-api-access-s4hsm\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.576994 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.576979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-bmnlv" event={"ID":"256f4e2a-fcff-437f-a29e-d344bf24b519","Type":"ContainerDied","Data":"a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d"} Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.577096 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5404f3abecb0482d9a9e02b0be9479f7806d4d2519a40cc5dd30471b76a387d" Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.577229 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l9g6w" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="registry-server" containerID="cri-o://0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0" gracePeriod=2 Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.938286 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-2wzr4"] Mar 11 10:30:06 crc kubenswrapper[4808]: I0311 10:30:06.945964 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-2wzr4"] Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.037398 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.126605 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsshw\" (UniqueName: \"kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw\") pod \"25bc2184-abc4-463f-82f9-fa045eb0ee89\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.126778 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content\") pod \"25bc2184-abc4-463f-82f9-fa045eb0ee89\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.126839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities\") pod \"25bc2184-abc4-463f-82f9-fa045eb0ee89\" (UID: \"25bc2184-abc4-463f-82f9-fa045eb0ee89\") " Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.128233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities" (OuterVolumeSpecName: "utilities") pod "25bc2184-abc4-463f-82f9-fa045eb0ee89" (UID: "25bc2184-abc4-463f-82f9-fa045eb0ee89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.132382 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw" (OuterVolumeSpecName: "kube-api-access-rsshw") pod "25bc2184-abc4-463f-82f9-fa045eb0ee89" (UID: "25bc2184-abc4-463f-82f9-fa045eb0ee89"). InnerVolumeSpecName "kube-api-access-rsshw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.229154 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.229189 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsshw\" (UniqueName: \"kubernetes.io/projected/25bc2184-abc4-463f-82f9-fa045eb0ee89-kube-api-access-rsshw\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.264680 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25bc2184-abc4-463f-82f9-fa045eb0ee89" (UID: "25bc2184-abc4-463f-82f9-fa045eb0ee89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.330442 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bc2184-abc4-463f-82f9-fa045eb0ee89-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.591084 4808 generic.go:334] "Generic (PLEG): container finished" podID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerID="0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0" exitCode=0 Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.591124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerDied","Data":"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0"} Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.591149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-l9g6w" event={"ID":"25bc2184-abc4-463f-82f9-fa045eb0ee89","Type":"ContainerDied","Data":"5f42e6cd6053be11b915314602d4cd7291b61330212f638a279897c58f128266"} Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.591168 4808 scope.go:117] "RemoveContainer" containerID="0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.591192 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9g6w" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.616230 4808 scope.go:117] "RemoveContainer" containerID="fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.651687 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.658054 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9g6w"] Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.670807 4808 scope.go:117] "RemoveContainer" containerID="afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.705226 4808 scope.go:117] "RemoveContainer" containerID="0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0" Mar 11 10:30:07 crc kubenswrapper[4808]: E0311 10:30:07.705757 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0\": container with ID starting with 0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0 not found: ID does not exist" containerID="0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.705816 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0"} err="failed to get container status \"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0\": rpc error: code = NotFound desc = could not find container \"0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0\": container with ID starting with 0d00e072516873cc08fc5b6f295ee438bcdc86facf34c4ac7f62f2227bffe8e0 not found: ID does not exist" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.705854 4808 scope.go:117] "RemoveContainer" containerID="fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5" Mar 11 10:30:07 crc kubenswrapper[4808]: E0311 10:30:07.706282 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5\": container with ID starting with fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5 not found: ID does not exist" containerID="fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.706312 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5"} err="failed to get container status \"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5\": rpc error: code = NotFound desc = could not find container \"fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5\": container with ID starting with fc0a4c576cf9a6db53e0b2a1d8abf330b009a5bc3adfa9b4c4746c53da7a67f5 not found: ID does not exist" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.706334 4808 scope.go:117] "RemoveContainer" containerID="afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87" Mar 11 10:30:07 crc kubenswrapper[4808]: E0311 
10:30:07.706673 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87\": container with ID starting with afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87 not found: ID does not exist" containerID="afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.706708 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87"} err="failed to get container status \"afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87\": rpc error: code = NotFound desc = could not find container \"afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87\": container with ID starting with afacd4449165e60065bc5ad2596ed124752f8815138f76d014efdfedd0532d87 not found: ID does not exist" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.806035 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" path="/var/lib/kubelet/pods/25bc2184-abc4-463f-82f9-fa045eb0ee89/volumes" Mar 11 10:30:07 crc kubenswrapper[4808]: I0311 10:30:07.806985 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a103e496-f239-4c74-ae75-7a7baa88edd5" path="/var/lib/kubelet/pods/a103e496-f239-4c74-ae75-7a7baa88edd5/volumes" Mar 11 10:30:17 crc kubenswrapper[4808]: I0311 10:30:17.789745 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:30:17 crc kubenswrapper[4808]: E0311 10:30:17.790731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:30:29 crc kubenswrapper[4808]: I0311 10:30:29.799677 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:30:29 crc kubenswrapper[4808]: E0311 10:30:29.800957 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:30:41 crc kubenswrapper[4808]: I0311 10:30:41.790080 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:30:41 crc kubenswrapper[4808]: E0311 10:30:41.791127 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:30:56 crc kubenswrapper[4808]: I0311 10:30:56.789614 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:30:57 crc kubenswrapper[4808]: I0311 10:30:57.065200 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874"} Mar 11 10:30:58 crc kubenswrapper[4808]: I0311 10:30:58.513605 4808 scope.go:117] "RemoveContainer" containerID="834d472ab3da85d30059757c7ab7e8a0051e6fcd749b98ec257ed1da6e382126" Mar 11 10:30:58 crc kubenswrapper[4808]: I0311 10:30:58.555463 4808 scope.go:117] "RemoveContainer" containerID="856340a335e6fb702ad5c5c24ef14ad63c72fd27b1de0efb12719b7c7718138d" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.144517 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553752-hrg4m"] Mar 11 10:32:00 crc kubenswrapper[4808]: E0311 10:32:00.145614 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.145637 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4808]: E0311 10:32:00.145653 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="extract-content" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.145665 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="extract-content" Mar 11 10:32:00 crc kubenswrapper[4808]: E0311 10:32:00.145694 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" containerName="collect-profiles" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.145707 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" containerName="collect-profiles" Mar 11 10:32:00 crc kubenswrapper[4808]: E0311 10:32:00.145740 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="extract-utilities" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.145753 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="extract-utilities" Mar 11 10:32:00 crc kubenswrapper[4808]: E0311 10:32:00.145782 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256f4e2a-fcff-437f-a29e-d344bf24b519" containerName="oc" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.145794 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="256f4e2a-fcff-437f-a29e-d344bf24b519" containerName="oc" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.146074 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bc2184-abc4-463f-82f9-fa045eb0ee89" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.146094 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7de11e-3179-4aa0-8cc8-d6c3056d9b03" containerName="collect-profiles" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.146114 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="256f4e2a-fcff-437f-a29e-d344bf24b519" containerName="oc" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.147078 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.149694 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.149901 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.149935 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.153821 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-hrg4m"] Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.286262 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thwsj\" (UniqueName: \"kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj\") pod \"auto-csr-approver-29553752-hrg4m\" (UID: \"3629122d-f8fc-4c36-980c-74732b419e4e\") " pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.388424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thwsj\" (UniqueName: \"kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj\") pod \"auto-csr-approver-29553752-hrg4m\" (UID: \"3629122d-f8fc-4c36-980c-74732b419e4e\") " pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.420903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thwsj\" (UniqueName: \"kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj\") pod \"auto-csr-approver-29553752-hrg4m\" (UID: \"3629122d-f8fc-4c36-980c-74732b419e4e\") " 
pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.470046 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:00 crc kubenswrapper[4808]: I0311 10:32:00.972534 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-hrg4m"] Mar 11 10:32:01 crc kubenswrapper[4808]: I0311 10:32:01.654741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" event={"ID":"3629122d-f8fc-4c36-980c-74732b419e4e","Type":"ContainerStarted","Data":"4f1b7dff181f1b090d1da74aab17415fa7c9695d7a6857b2848cc556b19edde3"} Mar 11 10:32:02 crc kubenswrapper[4808]: I0311 10:32:02.662038 4808 generic.go:334] "Generic (PLEG): container finished" podID="3629122d-f8fc-4c36-980c-74732b419e4e" containerID="6a45ee5f87790ef1fd0837a3fd4382ad40b0cfeec0d93f6c6b2a69e75b3abbf4" exitCode=0 Mar 11 10:32:02 crc kubenswrapper[4808]: I0311 10:32:02.662087 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" event={"ID":"3629122d-f8fc-4c36-980c-74732b419e4e","Type":"ContainerDied","Data":"6a45ee5f87790ef1fd0837a3fd4382ad40b0cfeec0d93f6c6b2a69e75b3abbf4"} Mar 11 10:32:03 crc kubenswrapper[4808]: I0311 10:32:03.966662 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.060924 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thwsj\" (UniqueName: \"kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj\") pod \"3629122d-f8fc-4c36-980c-74732b419e4e\" (UID: \"3629122d-f8fc-4c36-980c-74732b419e4e\") " Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.065737 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj" (OuterVolumeSpecName: "kube-api-access-thwsj") pod "3629122d-f8fc-4c36-980c-74732b419e4e" (UID: "3629122d-f8fc-4c36-980c-74732b419e4e"). InnerVolumeSpecName "kube-api-access-thwsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.163025 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thwsj\" (UniqueName: \"kubernetes.io/projected/3629122d-f8fc-4c36-980c-74732b419e4e-kube-api-access-thwsj\") on node \"crc\" DevicePath \"\"" Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.681098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" event={"ID":"3629122d-f8fc-4c36-980c-74732b419e4e","Type":"ContainerDied","Data":"4f1b7dff181f1b090d1da74aab17415fa7c9695d7a6857b2848cc556b19edde3"} Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.681420 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1b7dff181f1b090d1da74aab17415fa7c9695d7a6857b2848cc556b19edde3" Mar 11 10:32:04 crc kubenswrapper[4808]: I0311 10:32:04.681165 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-hrg4m" Mar 11 10:32:05 crc kubenswrapper[4808]: I0311 10:32:05.040581 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-bdxjr"] Mar 11 10:32:05 crc kubenswrapper[4808]: I0311 10:32:05.049600 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-bdxjr"] Mar 11 10:32:05 crc kubenswrapper[4808]: I0311 10:32:05.803598 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b293a9-6e2e-48d9-b4c0-e0f475a6288d" path="/var/lib/kubelet/pods/05b293a9-6e2e-48d9-b4c0-e0f475a6288d/volumes" Mar 11 10:32:58 crc kubenswrapper[4808]: I0311 10:32:58.676945 4808 scope.go:117] "RemoveContainer" containerID="563a1c239a4163caa081ca40ef059a8188572e0e777003522e6b13baa46a93c9" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.847985 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2gcw/must-gather-4q9wf"] Mar 11 10:33:07 crc kubenswrapper[4808]: E0311 10:33:07.849511 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3629122d-f8fc-4c36-980c-74732b419e4e" containerName="oc" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.849527 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3629122d-f8fc-4c36-980c-74732b419e4e" containerName="oc" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.849677 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3629122d-f8fc-4c36-980c-74732b419e4e" containerName="oc" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.850517 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.854800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w2gcw"/"default-dockercfg-bhh6n" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.854803 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2gcw"/"openshift-service-ca.crt" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.854993 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w2gcw"/"kube-root-ca.crt" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.861215 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2gcw/must-gather-4q9wf"] Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.941984 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzb7\" (UniqueName: \"kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:07 crc kubenswrapper[4808]: I0311 10:33:07.942089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.043659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " 
pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.043872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzb7\" (UniqueName: \"kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.044119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.066831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzb7\" (UniqueName: \"kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7\") pod \"must-gather-4q9wf\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.212762 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.661219 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w2gcw/must-gather-4q9wf"] Mar 11 10:33:08 crc kubenswrapper[4808]: I0311 10:33:08.672714 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:33:09 crc kubenswrapper[4808]: I0311 10:33:09.253057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" event={"ID":"6df58535-3024-4f98-b282-5d4a9ac4d6a3","Type":"ContainerStarted","Data":"5285902ddf1de89a4533fd96db236051f0a7468442d9ea6980dc8ea7d323f42f"} Mar 11 10:33:15 crc kubenswrapper[4808]: I0311 10:33:15.302796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" event={"ID":"6df58535-3024-4f98-b282-5d4a9ac4d6a3","Type":"ContainerStarted","Data":"57f82cca6fb2d46dc9f61c9a4a08a432f00755d5c029852ad781aeb038bb3af2"} Mar 11 10:33:15 crc kubenswrapper[4808]: I0311 10:33:15.303403 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" event={"ID":"6df58535-3024-4f98-b282-5d4a9ac4d6a3","Type":"ContainerStarted","Data":"a14bb0b3dd8d2c43ac60a068dee920ae075d91906efea0326f821cea997e9fbd"} Mar 11 10:33:15 crc kubenswrapper[4808]: I0311 10:33:15.332392 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" podStartSLOduration=2.329197473 podStartE2EDuration="8.332344123s" podCreationTimestamp="2026-03-11 10:33:07 +0000 UTC" firstStartedPulling="2026-03-11 10:33:08.67243471 +0000 UTC m=+6839.625758040" lastFinishedPulling="2026-03-11 10:33:14.67558137 +0000 UTC m=+6845.628904690" observedRunningTime="2026-03-11 10:33:15.322593038 +0000 UTC m=+6846.275916358" watchObservedRunningTime="2026-03-11 10:33:15.332344123 +0000 UTC 
m=+6846.285667443" Mar 11 10:33:16 crc kubenswrapper[4808]: I0311 10:33:16.027349 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:33:16 crc kubenswrapper[4808]: I0311 10:33:16.027800 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:33:17 crc kubenswrapper[4808]: E0311 10:33:17.230962 4808 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:47232->38.102.83.113:39975: write tcp 38.102.83.113:47232->38.102.83.113:39975: write: connection reset by peer Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.680248 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-7pqnt"] Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.681471 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.806320 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.806660 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqwq\" (UniqueName: \"kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.908369 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqwq\" (UniqueName: \"kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.908551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc kubenswrapper[4808]: I0311 10:33:17.909096 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:17 crc 
kubenswrapper[4808]: I0311 10:33:17.927275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqwq\" (UniqueName: \"kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq\") pod \"crc-debug-7pqnt\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:18 crc kubenswrapper[4808]: I0311 10:33:17.999928 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:18 crc kubenswrapper[4808]: I0311 10:33:18.327813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" event={"ID":"f260d383-e9d7-4525-a2ec-11641a0f2ac6","Type":"ContainerStarted","Data":"c1bc82b016a8ebb78562a6d1b27c1743123e063a977c4124a7ae3e2b6f9b312f"} Mar 11 10:33:30 crc kubenswrapper[4808]: I0311 10:33:30.429961 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" event={"ID":"f260d383-e9d7-4525-a2ec-11641a0f2ac6","Type":"ContainerStarted","Data":"3ccdb61d31a53d08c4311cc817845032d20856957530f121ece74b0aa3327213"} Mar 11 10:33:30 crc kubenswrapper[4808]: I0311 10:33:30.446839 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" podStartSLOduration=1.955754116 podStartE2EDuration="13.446821606s" podCreationTimestamp="2026-03-11 10:33:17 +0000 UTC" firstStartedPulling="2026-03-11 10:33:18.032742613 +0000 UTC m=+6848.986065933" lastFinishedPulling="2026-03-11 10:33:29.523810093 +0000 UTC m=+6860.477133423" observedRunningTime="2026-03-11 10:33:30.443864422 +0000 UTC m=+6861.397187742" watchObservedRunningTime="2026-03-11 10:33:30.446821606 +0000 UTC m=+6861.400144926" Mar 11 10:33:46 crc kubenswrapper[4808]: I0311 10:33:46.027571 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:33:46 crc kubenswrapper[4808]: I0311 10:33:46.028044 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:33:47 crc kubenswrapper[4808]: I0311 10:33:47.563213 4808 generic.go:334] "Generic (PLEG): container finished" podID="f260d383-e9d7-4525-a2ec-11641a0f2ac6" containerID="3ccdb61d31a53d08c4311cc817845032d20856957530f121ece74b0aa3327213" exitCode=0 Mar 11 10:33:47 crc kubenswrapper[4808]: I0311 10:33:47.563324 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" event={"ID":"f260d383-e9d7-4525-a2ec-11641a0f2ac6","Type":"ContainerDied","Data":"3ccdb61d31a53d08c4311cc817845032d20856957530f121ece74b0aa3327213"} Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.679843 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.706853 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-7pqnt"] Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.713350 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-7pqnt"] Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.720198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxqwq\" (UniqueName: \"kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq\") pod \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.720455 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host\") pod \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\" (UID: \"f260d383-e9d7-4525-a2ec-11641a0f2ac6\") " Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.720694 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host" (OuterVolumeSpecName: "host") pod "f260d383-e9d7-4525-a2ec-11641a0f2ac6" (UID: "f260d383-e9d7-4525-a2ec-11641a0f2ac6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.721150 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f260d383-e9d7-4525-a2ec-11641a0f2ac6-host\") on node \"crc\" DevicePath \"\"" Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.739505 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq" (OuterVolumeSpecName: "kube-api-access-dxqwq") pod "f260d383-e9d7-4525-a2ec-11641a0f2ac6" (UID: "f260d383-e9d7-4525-a2ec-11641a0f2ac6"). InnerVolumeSpecName "kube-api-access-dxqwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:33:48 crc kubenswrapper[4808]: I0311 10:33:48.822507 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxqwq\" (UniqueName: \"kubernetes.io/projected/f260d383-e9d7-4525-a2ec-11641a0f2ac6-kube-api-access-dxqwq\") on node \"crc\" DevicePath \"\"" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.579129 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1bc82b016a8ebb78562a6d1b27c1743123e063a977c4124a7ae3e2b6f9b312f" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.579221 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-7pqnt" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.801742 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f260d383-e9d7-4525-a2ec-11641a0f2ac6" path="/var/lib/kubelet/pods/f260d383-e9d7-4525-a2ec-11641a0f2ac6/volumes" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.893935 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-5tsdl"] Mar 11 10:33:49 crc kubenswrapper[4808]: E0311 10:33:49.894274 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f260d383-e9d7-4525-a2ec-11641a0f2ac6" containerName="container-00" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.894288 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f260d383-e9d7-4525-a2ec-11641a0f2ac6" containerName="container-00" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.894454 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f260d383-e9d7-4525-a2ec-11641a0f2ac6" containerName="container-00" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.894950 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.941315 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xd5v\" (UniqueName: \"kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:49 crc kubenswrapper[4808]: I0311 10:33:49.941533 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.043034 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.043125 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xd5v\" (UniqueName: \"kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.043214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc 
kubenswrapper[4808]: I0311 10:33:50.063098 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xd5v\" (UniqueName: \"kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v\") pod \"crc-debug-5tsdl\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.213950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:50 crc kubenswrapper[4808]: W0311 10:33:50.265804 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c6d14f_ef79_41d8_948a_fbd510024ee2.slice/crio-530d8f50917a1f1b37d2b435f6a79b9042156246cabecfea79b068acaa9d1ed0 WatchSource:0}: Error finding container 530d8f50917a1f1b37d2b435f6a79b9042156246cabecfea79b068acaa9d1ed0: Status 404 returned error can't find the container with id 530d8f50917a1f1b37d2b435f6a79b9042156246cabecfea79b068acaa9d1ed0 Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.588391 4808 generic.go:334] "Generic (PLEG): container finished" podID="a0c6d14f-ef79-41d8-948a-fbd510024ee2" containerID="8048bd987cc4122f65ba3d593cabc104fa8a4940b1c44855d0531b1b0c723872" exitCode=1 Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.588469 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" event={"ID":"a0c6d14f-ef79-41d8-948a-fbd510024ee2","Type":"ContainerDied","Data":"8048bd987cc4122f65ba3d593cabc104fa8a4940b1c44855d0531b1b0c723872"} Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.588757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" event={"ID":"a0c6d14f-ef79-41d8-948a-fbd510024ee2","Type":"ContainerStarted","Data":"530d8f50917a1f1b37d2b435f6a79b9042156246cabecfea79b068acaa9d1ed0"} Mar 11 
10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.624926 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-5tsdl"] Mar 11 10:33:50 crc kubenswrapper[4808]: I0311 10:33:50.635245 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2gcw/crc-debug-5tsdl"] Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.686111 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.771977 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xd5v\" (UniqueName: \"kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v\") pod \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.772050 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host\") pod \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\" (UID: \"a0c6d14f-ef79-41d8-948a-fbd510024ee2\") " Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.772171 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host" (OuterVolumeSpecName: "host") pod "a0c6d14f-ef79-41d8-948a-fbd510024ee2" (UID: "a0c6d14f-ef79-41d8-948a-fbd510024ee2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.772402 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0c6d14f-ef79-41d8-948a-fbd510024ee2-host\") on node \"crc\" DevicePath \"\"" Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.776913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v" (OuterVolumeSpecName: "kube-api-access-2xd5v") pod "a0c6d14f-ef79-41d8-948a-fbd510024ee2" (UID: "a0c6d14f-ef79-41d8-948a-fbd510024ee2"). InnerVolumeSpecName "kube-api-access-2xd5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.801452 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c6d14f-ef79-41d8-948a-fbd510024ee2" path="/var/lib/kubelet/pods/a0c6d14f-ef79-41d8-948a-fbd510024ee2/volumes" Mar 11 10:33:51 crc kubenswrapper[4808]: I0311 10:33:51.874187 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xd5v\" (UniqueName: \"kubernetes.io/projected/a0c6d14f-ef79-41d8-948a-fbd510024ee2-kube-api-access-2xd5v\") on node \"crc\" DevicePath \"\"" Mar 11 10:33:52 crc kubenswrapper[4808]: I0311 10:33:52.607120 4808 scope.go:117] "RemoveContainer" containerID="8048bd987cc4122f65ba3d593cabc104fa8a4940b1c44855d0531b1b0c723872" Mar 11 10:33:52 crc kubenswrapper[4808]: I0311 10:33:52.607220 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/crc-debug-5tsdl" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.830501 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:33:53 crc kubenswrapper[4808]: E0311 10:33:53.831164 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c6d14f-ef79-41d8-948a-fbd510024ee2" containerName="container-00" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.831179 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c6d14f-ef79-41d8-948a-fbd510024ee2" containerName="container-00" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.831416 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c6d14f-ef79-41d8-948a-fbd510024ee2" containerName="container-00" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.832766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.848766 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.904162 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.904257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk9tk\" (UniqueName: \"kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " 
pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:53 crc kubenswrapper[4808]: I0311 10:33:53.904299 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.005551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.005632 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk9tk\" (UniqueName: \"kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.005666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.006705 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " 
pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.006756 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.025343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk9tk\" (UniqueName: \"kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk\") pod \"certified-operators-w5tx6\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.176738 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:33:54 crc kubenswrapper[4808]: I0311 10:33:54.695712 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:33:54 crc kubenswrapper[4808]: W0311 10:33:54.703517 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod040a18bd_c666_4b4d_9269_afb1cdebe8de.slice/crio-f0168754688cd7480bdca5d5fda07dfdf57e2822b259786859b430bc80bf6324 WatchSource:0}: Error finding container f0168754688cd7480bdca5d5fda07dfdf57e2822b259786859b430bc80bf6324: Status 404 returned error can't find the container with id f0168754688cd7480bdca5d5fda07dfdf57e2822b259786859b430bc80bf6324 Mar 11 10:33:55 crc kubenswrapper[4808]: I0311 10:33:55.634805 4808 generic.go:334] "Generic (PLEG): container finished" podID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerID="5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707" 
exitCode=0 Mar 11 10:33:55 crc kubenswrapper[4808]: I0311 10:33:55.634847 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerDied","Data":"5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707"} Mar 11 10:33:55 crc kubenswrapper[4808]: I0311 10:33:55.634872 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerStarted","Data":"f0168754688cd7480bdca5d5fda07dfdf57e2822b259786859b430bc80bf6324"} Mar 11 10:33:57 crc kubenswrapper[4808]: I0311 10:33:57.653528 4808 generic.go:334] "Generic (PLEG): container finished" podID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerID="ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589" exitCode=0 Mar 11 10:33:57 crc kubenswrapper[4808]: I0311 10:33:57.653629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerDied","Data":"ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589"} Mar 11 10:33:58 crc kubenswrapper[4808]: I0311 10:33:58.663895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerStarted","Data":"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5"} Mar 11 10:33:58 crc kubenswrapper[4808]: I0311 10:33:58.683834 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w5tx6" podStartSLOduration=3.177317226 podStartE2EDuration="5.683817848s" podCreationTimestamp="2026-03-11 10:33:53 +0000 UTC" firstStartedPulling="2026-03-11 10:33:55.638467849 +0000 UTC m=+6886.591791169" lastFinishedPulling="2026-03-11 10:33:58.144968431 
+0000 UTC m=+6889.098291791" observedRunningTime="2026-03-11 10:33:58.681164893 +0000 UTC m=+6889.634488223" watchObservedRunningTime="2026-03-11 10:33:58.683817848 +0000 UTC m=+6889.637141178" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.144347 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553754-rckb9"] Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.145697 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.147854 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.147969 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.148137 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.164299 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-rckb9"] Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.233122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgpk\" (UniqueName: \"kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk\") pod \"auto-csr-approver-29553754-rckb9\" (UID: \"b6928cdf-b46f-41da-a0e1-842ab6fb5c00\") " pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.334696 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lgpk\" (UniqueName: \"kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk\") pod 
\"auto-csr-approver-29553754-rckb9\" (UID: \"b6928cdf-b46f-41da-a0e1-842ab6fb5c00\") " pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.357061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lgpk\" (UniqueName: \"kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk\") pod \"auto-csr-approver-29553754-rckb9\" (UID: \"b6928cdf-b46f-41da-a0e1-842ab6fb5c00\") " pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.461571 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:00 crc kubenswrapper[4808]: I0311 10:34:00.913856 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-rckb9"] Mar 11 10:34:01 crc kubenswrapper[4808]: I0311 10:34:01.687796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-rckb9" event={"ID":"b6928cdf-b46f-41da-a0e1-842ab6fb5c00","Type":"ContainerStarted","Data":"e0caf1626542c1c6e7d0b13240edd914b1f7195a45221115712afb68a1a36070"} Mar 11 10:34:02 crc kubenswrapper[4808]: I0311 10:34:02.697558 4808 generic.go:334] "Generic (PLEG): container finished" podID="b6928cdf-b46f-41da-a0e1-842ab6fb5c00" containerID="dfa6d5cb26081fc108c13dd9b053a4dcc61ed2f4b86c993f119bce09ca0fa557" exitCode=0 Mar 11 10:34:02 crc kubenswrapper[4808]: I0311 10:34:02.697744 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-rckb9" event={"ID":"b6928cdf-b46f-41da-a0e1-842ab6fb5c00","Type":"ContainerDied","Data":"dfa6d5cb26081fc108c13dd9b053a4dcc61ed2f4b86c993f119bce09ca0fa557"} Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.013123 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.103123 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lgpk\" (UniqueName: \"kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk\") pod \"b6928cdf-b46f-41da-a0e1-842ab6fb5c00\" (UID: \"b6928cdf-b46f-41da-a0e1-842ab6fb5c00\") " Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.108384 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk" (OuterVolumeSpecName: "kube-api-access-2lgpk") pod "b6928cdf-b46f-41da-a0e1-842ab6fb5c00" (UID: "b6928cdf-b46f-41da-a0e1-842ab6fb5c00"). InnerVolumeSpecName "kube-api-access-2lgpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.178418 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.178642 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.205800 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lgpk\" (UniqueName: \"kubernetes.io/projected/b6928cdf-b46f-41da-a0e1-842ab6fb5c00-kube-api-access-2lgpk\") on node \"crc\" DevicePath \"\"" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.224001 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.722048 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-rckb9" 
event={"ID":"b6928cdf-b46f-41da-a0e1-842ab6fb5c00","Type":"ContainerDied","Data":"e0caf1626542c1c6e7d0b13240edd914b1f7195a45221115712afb68a1a36070"} Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.722263 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0caf1626542c1c6e7d0b13240edd914b1f7195a45221115712afb68a1a36070" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.722086 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-rckb9" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.791603 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:04 crc kubenswrapper[4808]: I0311 10:34:04.869247 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:34:05 crc kubenswrapper[4808]: I0311 10:34:05.078708 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-zbbpq"] Mar 11 10:34:05 crc kubenswrapper[4808]: I0311 10:34:05.088411 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-zbbpq"] Mar 11 10:34:05 crc kubenswrapper[4808]: I0311 10:34:05.805161 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa846b72-49f5-425c-8b18-f5d4cdb63558" path="/var/lib/kubelet/pods/fa846b72-49f5-425c-8b18-f5d4cdb63558/volumes" Mar 11 10:34:06 crc kubenswrapper[4808]: I0311 10:34:06.741039 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w5tx6" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="registry-server" containerID="cri-o://df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5" gracePeriod=2 Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.203237 4808 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.256976 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content\") pod \"040a18bd-c666-4b4d-9269-afb1cdebe8de\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.257047 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities\") pod \"040a18bd-c666-4b4d-9269-afb1cdebe8de\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.257120 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk9tk\" (UniqueName: \"kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk\") pod \"040a18bd-c666-4b4d-9269-afb1cdebe8de\" (UID: \"040a18bd-c666-4b4d-9269-afb1cdebe8de\") " Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.258022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities" (OuterVolumeSpecName: "utilities") pod "040a18bd-c666-4b4d-9269-afb1cdebe8de" (UID: "040a18bd-c666-4b4d-9269-afb1cdebe8de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.265855 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk" (OuterVolumeSpecName: "kube-api-access-zk9tk") pod "040a18bd-c666-4b4d-9269-afb1cdebe8de" (UID: "040a18bd-c666-4b4d-9269-afb1cdebe8de"). 
InnerVolumeSpecName "kube-api-access-zk9tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.325269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "040a18bd-c666-4b4d-9269-afb1cdebe8de" (UID: "040a18bd-c666-4b4d-9269-afb1cdebe8de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.359575 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk9tk\" (UniqueName: \"kubernetes.io/projected/040a18bd-c666-4b4d-9269-afb1cdebe8de-kube-api-access-zk9tk\") on node \"crc\" DevicePath \"\"" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.359853 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.359936 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/040a18bd-c666-4b4d-9269-afb1cdebe8de-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.752392 4808 generic.go:334] "Generic (PLEG): container finished" podID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerID="df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5" exitCode=0 Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.752443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerDied","Data":"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5"} Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.752476 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5tx6" event={"ID":"040a18bd-c666-4b4d-9269-afb1cdebe8de","Type":"ContainerDied","Data":"f0168754688cd7480bdca5d5fda07dfdf57e2822b259786859b430bc80bf6324"} Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.752496 4808 scope.go:117] "RemoveContainer" containerID="df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.753154 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5tx6" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.773974 4808 scope.go:117] "RemoveContainer" containerID="ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.803555 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.803596 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w5tx6"] Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.821557 4808 scope.go:117] "RemoveContainer" containerID="5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.836397 4808 scope.go:117] "RemoveContainer" containerID="df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5" Mar 11 10:34:07 crc kubenswrapper[4808]: E0311 10:34:07.837024 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5\": container with ID starting with df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5 not found: ID does not exist" containerID="df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5" Mar 11 10:34:07 
crc kubenswrapper[4808]: I0311 10:34:07.837063 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5"} err="failed to get container status \"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5\": rpc error: code = NotFound desc = could not find container \"df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5\": container with ID starting with df15bd19fc4e2eedfe47f03196705a51076b0e1fb07a4930c4c1a9f45d541ed5 not found: ID does not exist" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.837089 4808 scope.go:117] "RemoveContainer" containerID="ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589" Mar 11 10:34:07 crc kubenswrapper[4808]: E0311 10:34:07.837529 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589\": container with ID starting with ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589 not found: ID does not exist" containerID="ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.837557 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589"} err="failed to get container status \"ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589\": rpc error: code = NotFound desc = could not find container \"ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589\": container with ID starting with ab73ba7352de2bc605550a2e4ede3f6f7614cfb8388116c3d7937c4902c49589 not found: ID does not exist" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.837573 4808 scope.go:117] "RemoveContainer" containerID="5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707" Mar 11 
10:34:07 crc kubenswrapper[4808]: E0311 10:34:07.837930 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707\": container with ID starting with 5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707 not found: ID does not exist" containerID="5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707" Mar 11 10:34:07 crc kubenswrapper[4808]: I0311 10:34:07.837981 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707"} err="failed to get container status \"5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707\": rpc error: code = NotFound desc = could not find container \"5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707\": container with ID starting with 5872e4a99107a05d5d88bb12a8814e3032a90fd12a00368073eb3313367a3707 not found: ID does not exist" Mar 11 10:34:08 crc kubenswrapper[4808]: I0311 10:34:08.605878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b77f9d55-nglmp_47681338-c25c-475f-a5a9-faa1c1f65049/init/0.log" Mar 11 10:34:08 crc kubenswrapper[4808]: I0311 10:34:08.773559 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b77f9d55-nglmp_47681338-c25c-475f-a5a9-faa1c1f65049/init/0.log" Mar 11 10:34:08 crc kubenswrapper[4808]: I0311 10:34:08.790218 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b77f9d55-nglmp_47681338-c25c-475f-a5a9-faa1c1f65049/dnsmasq-dns/0.log" Mar 11 10:34:08 crc kubenswrapper[4808]: I0311 10:34:08.946972 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-fd9864c75-s46ps_8a0372f6-df1c-4590-9d76-eab6ef966ab2/keystone-api/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.051011 4808 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_652963de-2f4b-4395-a2e9-4f1fbd52f042/adoption/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.239090 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d/mysql-bootstrap/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.427502 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d/mysql-bootstrap/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.459606 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_67bbd1bb-f8a9-4de4-b4a0-fbe575d7a85d/galera/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.629404 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a8ead8e6-bc4c-48cf-9375-cc9c27c5db50/mysql-bootstrap/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.808517 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" path="/var/lib/kubelet/pods/040a18bd-c666-4b4d-9269-afb1cdebe8de/volumes" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.825259 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a8ead8e6-bc4c-48cf-9375-cc9c27c5db50/mysql-bootstrap/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.855830 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a8ead8e6-bc4c-48cf-9375-cc9c27c5db50/galera/0.log" Mar 11 10:34:09 crc kubenswrapper[4808]: I0311 10:34:09.972770 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ff6f14ad-50df-4762-a17f-a0390d23b346/openstackclient/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.118087 4808 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-copy-data_25eaf277-9c52-4ae6-909f-5340d24dc284/adoption/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.351462 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c5eb2773-f6fc-4322-8bb9-ded8d96de7bb/openstack-network-exporter/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.371424 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c5eb2773-f6fc-4322-8bb9-ded8d96de7bb/ovn-northd/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.540180 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_58c3819d-74e4-4b60-9633-d91a32c760a4/ovsdbserver-nb/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.548745 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9687850f-8da8-4ab1-9316-7d0d9d4f75f6/memcached/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.555430 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_58c3819d-74e4-4b60-9633-d91a32c760a4/openstack-network-exporter/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.686582 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8150999f-828b-4e71-98b6-9edd203c7b27/openstack-network-exporter/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.737308 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8150999f-828b-4e71-98b6-9edd203c7b27/ovsdbserver-nb/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.875791 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_4b0b0d29-2902-4951-812c-05049145f128/openstack-network-exporter/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.952271 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_4b0b0d29-2902-4951-812c-05049145f128/ovsdbserver-nb/0.log" Mar 11 10:34:10 crc kubenswrapper[4808]: I0311 10:34:10.957066 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_047f7cf8-06eb-4caf-8ffc-846dcee172f4/openstack-network-exporter/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.038001 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_047f7cf8-06eb-4caf-8ffc-846dcee172f4/ovsdbserver-sb/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.150372 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_ee49f07d-8df5-402e-824a-3635bdfbe980/ovsdbserver-sb/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.161859 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_ee49f07d-8df5-402e-824a-3635bdfbe980/openstack-network-exporter/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.317270 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_bffb61a6-2929-4424-a4cc-b8cac705264a/openstack-network-exporter/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.346329 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_bffb61a6-2929-4424-a4cc-b8cac705264a/ovsdbserver-sb/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.453308 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1274333-a9d8-468f-be72-671f26f26d78/setup-container/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.634046 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1274333-a9d8-468f-be72-671f26f26d78/setup-container/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.674171 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_36603a85-71a0-4dc5-ab71-cfee0f3331c3/setup-container/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.676137 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e1274333-a9d8-468f-be72-671f26f26d78/rabbitmq/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.885611 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_36603a85-71a0-4dc5-ab71-cfee0f3331c3/rabbitmq/0.log" Mar 11 10:34:11 crc kubenswrapper[4808]: I0311 10:34:11.894185 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_36603a85-71a0-4dc5-ab71-cfee0f3331c3/setup-container/0.log" Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.027524 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.027938 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.027992 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.028814 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874"} 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.028883 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874" gracePeriod=600 Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.817888 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874" exitCode=0 Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.818333 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874"} Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.818369 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerStarted","Data":"1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223"} Mar 11 10:34:16 crc kubenswrapper[4808]: I0311 10:34:16.818385 4808 scope.go:117] "RemoveContainer" containerID="736d4f503e3af501480de20ee69fa653054f5eeb2ae9186331abccc648c6682f" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.459992 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/util/0.log" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.716431 4808 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/pull/0.log" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.758992 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/pull/0.log" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.779295 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/util/0.log" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.918214 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/pull/0.log" Mar 11 10:34:27 crc kubenswrapper[4808]: I0311 10:34:27.968614 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/util/0.log" Mar 11 10:34:28 crc kubenswrapper[4808]: I0311 10:34:28.001633 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49banh6c4_0a920883-841f-4567-aed2-8d6c2f5e2d1e/extract/0.log" Mar 11 10:34:28 crc kubenswrapper[4808]: I0311 10:34:28.512677 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-7zq56_46a1ed3a-101b-4412-80d3-b246794f4439/manager/0.log" Mar 11 10:34:28 crc kubenswrapper[4808]: I0311 10:34:28.755701 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-lqxgc_cd11b5ec-454a-4bbb-a4e4-5b4569c0e219/manager/0.log" Mar 11 
10:34:28 crc kubenswrapper[4808]: I0311 10:34:28.928349 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-6wz56_4d0b42e5-b4bf-47a9-afed-ddcf3f770ca0/manager/0.log" Mar 11 10:34:29 crc kubenswrapper[4808]: I0311 10:34:29.179232 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-n49mc_14bd6d19-3c60-4752-bd29-09809df6f7b6/manager/0.log" Mar 11 10:34:29 crc kubenswrapper[4808]: I0311 10:34:29.762173 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-n674k_b1553dcf-b18c-45ef-a328-9eb5b86d5a02/manager/0.log" Mar 11 10:34:29 crc kubenswrapper[4808]: I0311 10:34:29.987159 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-cb6bf_557a289a-7329-40b7-a593-bfcfa58e679d/manager/0.log" Mar 11 10:34:30 crc kubenswrapper[4808]: I0311 10:34:30.279637 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-9k6vf_f90023c4-1729-4193-9853-4548be9c786c/manager/0.log" Mar 11 10:34:30 crc kubenswrapper[4808]: I0311 10:34:30.491141 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-95v7v_b39efce9-91d6-43b0-b80c-30c223d26460/manager/0.log" Mar 11 10:34:30 crc kubenswrapper[4808]: I0311 10:34:30.718984 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-nt6vp_b82ff9e7-d047-4da9-8de7-177e1a3fbb7e/manager/0.log" Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.085143 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-lsmx2_6863fca8-473b-4fc3-8e19-f717c2d164c9/manager/0.log" 
Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.102978 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-snl5f_9d6bd72a-6ed8-4558-950a-80c4aab533b0/manager/0.log" Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.403148 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-44gbn_94668c95-67c1-4f5e-9bde-e2d34d7ce631/manager/0.log" Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.485878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-8kpv9_354b0411-5c50-48d0-9ed8-e4871a92dc0e/manager/0.log" Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.640295 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6647d7885fct7h5_aabfac3f-1196-4f9c-be5f-84bfbb833ae3/manager/0.log" Mar 11 10:34:31 crc kubenswrapper[4808]: I0311 10:34:31.838068 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6cf8df7788-ghlgz_e125e50d-5830-4cbc-9eaf-7df7c62cb706/operator/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.158919 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-s5m27_d013bfe9-72da-4846-9528-9e5d1c6846e7/registry-server/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.246346 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-pgtxk_cf281551-b7f4-4d5c-823e-6e70132ae2d0/manager/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.354487 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-b72f8_7dd34e69-2d2e-43aa-9f2e-ee9a2a747a0a/manager/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.576971 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7lk57_b0711f78-69be-46d0-8857-e01d6927edfd/operator/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.743247 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-j9bkm_4b276a74-0d31-48b5-9556-0671578c4ab2/manager/0.log" Mar 11 10:34:32 crc kubenswrapper[4808]: I0311 10:34:32.947109 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-clf8c_1ea888eb-3b8a-4871-846d-6a3006087ffb/manager/0.log" Mar 11 10:34:33 crc kubenswrapper[4808]: I0311 10:34:33.048779 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-kns94_cd5790a0-53c4-4ee8-95fa-f72ea9135488/manager/0.log" Mar 11 10:34:33 crc kubenswrapper[4808]: I0311 10:34:33.172154 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-jw9mr_90bdb089-2ce0-4e37-bda6-5db68ebd89e8/manager/0.log" Mar 11 10:34:33 crc kubenswrapper[4808]: I0311 10:34:33.299490 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6679ddfdc7-c8cqh_37e5de88-802a-408e-9362-51166d0b7662/manager/0.log" Mar 11 10:34:39 crc kubenswrapper[4808]: I0311 10:34:39.940459 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-s5kbb_383d2b79-82c7-4abd-bd0e-7cc157c35f28/manager/0.log" Mar 11 10:34:53 crc kubenswrapper[4808]: I0311 10:34:53.198957 4808 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s9lbs_cde5dc2c-6004-42ac-bd4a-93f0bec898fa/control-plane-machine-set-operator/0.log" Mar 11 10:34:53 crc kubenswrapper[4808]: I0311 10:34:53.399126 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zld8r_da668c40-37d5-4ebf-ae7e-59e9c301b386/kube-rbac-proxy/0.log" Mar 11 10:34:53 crc kubenswrapper[4808]: I0311 10:34:53.416728 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zld8r_da668c40-37d5-4ebf-ae7e-59e9c301b386/machine-api-operator/0.log" Mar 11 10:34:58 crc kubenswrapper[4808]: I0311 10:34:58.786304 4808 scope.go:117] "RemoveContainer" containerID="f6cb416eecf3995cee4e47b0724af81d5f8dfe90e32904f3b1bde907572e8246" Mar 11 10:35:06 crc kubenswrapper[4808]: I0311 10:35:06.076118 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-z7dgl_3d663586-8683-4f72-adb2-86264ae4d951/cert-manager-controller/0.log" Mar 11 10:35:06 crc kubenswrapper[4808]: I0311 10:35:06.212278 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-hpwdk_6ea1f6d6-8e38-4c9b-bc81-d72f4f182323/cert-manager-cainjector/0.log" Mar 11 10:35:06 crc kubenswrapper[4808]: I0311 10:35:06.247037 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-l9v5h_f5f2c055-31b4-4e3b-a3e9-7e724fb0f1de/cert-manager-webhook/0.log" Mar 11 10:35:19 crc kubenswrapper[4808]: I0311 10:35:19.866131 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-cnw9s_80c7794c-5470-4f6b-8f7d-abc0e9f31785/nmstate-console-plugin/0.log" Mar 11 10:35:20 crc kubenswrapper[4808]: I0311 10:35:20.052418 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-rzf4d_bac5717e-202e-43a5-a822-a7eeedc11af5/nmstate-handler/0.log" Mar 11 10:35:20 crc kubenswrapper[4808]: I0311 10:35:20.143823 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-trl6k_05ae43c7-5b21-4813-adfe-8906527c2a44/kube-rbac-proxy/0.log" Mar 11 10:35:20 crc kubenswrapper[4808]: I0311 10:35:20.192897 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-trl6k_05ae43c7-5b21-4813-adfe-8906527c2a44/nmstate-metrics/0.log" Mar 11 10:35:20 crc kubenswrapper[4808]: I0311 10:35:20.241042 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-6bfzv_5c6a1c35-c3e7-41df-b410-34578fad1d2d/nmstate-operator/0.log" Mar 11 10:35:20 crc kubenswrapper[4808]: I0311 10:35:20.365839 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-ds975_85875097-abf3-4f4d-baa8-19ee0d1b85e7/nmstate-webhook/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.126757 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-bj8qf_3bcbbc12-ad14-4533-8e3a-740616030126/kube-rbac-proxy/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.339286 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-frr-files/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.532967 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-bj8qf_3bcbbc12-ad14-4533-8e3a-740616030126/controller/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.553302 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-frr-files/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 
10:35:48.572059 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-metrics/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.636441 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-reloader/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.705619 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-reloader/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.862285 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-frr-files/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.885121 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-metrics/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.915993 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-metrics/0.log" Mar 11 10:35:48 crc kubenswrapper[4808]: I0311 10:35:48.939338 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-reloader/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.130314 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-frr-files/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.133809 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-reloader/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.178069 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/controller/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.197073 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/cp-metrics/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.414038 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/kube-rbac-proxy/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.436378 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/frr-metrics/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.468471 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/kube-rbac-proxy-frr/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.630467 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/reloader/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.640042 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nn6tk_b4effb47-31c0-4aff-89e2-0bee10906717/frr-k8s-webhook-server/0.log" Mar 11 10:35:49 crc kubenswrapper[4808]: I0311 10:35:49.852700 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66bd996c44-tl4z2_966ee7c4-0701-4503-adcb-b5f389defad1/manager/0.log" Mar 11 10:35:50 crc kubenswrapper[4808]: I0311 10:35:50.053153 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-666999559-qhsgf_d25c1c34-0aa8-42ca-ad90-0c50c934b31d/webhook-server/0.log" Mar 11 10:35:50 crc kubenswrapper[4808]: I0311 10:35:50.129641 4808 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xfmfr_43da1e53-74d6-4e82-8937-beb757888f5f/kube-rbac-proxy/0.log" Mar 11 10:35:50 crc kubenswrapper[4808]: I0311 10:35:50.819676 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xfmfr_43da1e53-74d6-4e82-8937-beb757888f5f/speaker/0.log" Mar 11 10:35:51 crc kubenswrapper[4808]: I0311 10:35:51.647961 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sfkdf_50d06bd1-c292-47f1-bea4-19930631d122/frr/0.log" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.146630 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553756-pfks8"] Mar 11 10:36:00 crc kubenswrapper[4808]: E0311 10:36:00.147485 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="extract-utilities" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147500 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="extract-utilities" Mar 11 10:36:00 crc kubenswrapper[4808]: E0311 10:36:00.147530 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="extract-content" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147536 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="extract-content" Mar 11 10:36:00 crc kubenswrapper[4808]: E0311 10:36:00.147553 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6928cdf-b46f-41da-a0e1-842ab6fb5c00" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147560 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6928cdf-b46f-41da-a0e1-842ab6fb5c00" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4808]: E0311 10:36:00.147566 4808 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="registry-server" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147572 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="registry-server" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147708 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6928cdf-b46f-41da-a0e1-842ab6fb5c00" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.147728 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="040a18bd-c666-4b4d-9269-afb1cdebe8de" containerName="registry-server" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.148218 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.150657 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.150812 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.151082 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.170263 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-pfks8"] Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.229399 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qg9c\" (UniqueName: \"kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c\") pod \"auto-csr-approver-29553756-pfks8\" (UID: \"02e16810-917f-4884-bd9f-60d19cba0be0\") " 
pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.331100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qg9c\" (UniqueName: \"kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c\") pod \"auto-csr-approver-29553756-pfks8\" (UID: \"02e16810-917f-4884-bd9f-60d19cba0be0\") " pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.349849 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qg9c\" (UniqueName: \"kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c\") pod \"auto-csr-approver-29553756-pfks8\" (UID: \"02e16810-917f-4884-bd9f-60d19cba0be0\") " pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.469469 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:00 crc kubenswrapper[4808]: I0311 10:36:00.965260 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-pfks8"] Mar 11 10:36:01 crc kubenswrapper[4808]: I0311 10:36:01.687416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-pfks8" event={"ID":"02e16810-917f-4884-bd9f-60d19cba0be0","Type":"ContainerStarted","Data":"e1576f808737bd07f6e442827ad9a7b59207aa5c2c0f2d501b5a4f178ee07b05"} Mar 11 10:36:02 crc kubenswrapper[4808]: I0311 10:36:02.694469 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-pfks8" event={"ID":"02e16810-917f-4884-bd9f-60d19cba0be0","Type":"ContainerStarted","Data":"4fc2cebf2ba7232929e638d2dc38f5bf0fd275a4558d30cdc871b7e13a3e8dfc"} Mar 11 10:36:02 crc kubenswrapper[4808]: I0311 10:36:02.711298 4808 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553756-pfks8" podStartSLOduration=1.410012157 podStartE2EDuration="2.711271886s" podCreationTimestamp="2026-03-11 10:36:00 +0000 UTC" firstStartedPulling="2026-03-11 10:36:00.968196366 +0000 UTC m=+7011.921519686" lastFinishedPulling="2026-03-11 10:36:02.269456095 +0000 UTC m=+7013.222779415" observedRunningTime="2026-03-11 10:36:02.705964657 +0000 UTC m=+7013.659287987" watchObservedRunningTime="2026-03-11 10:36:02.711271886 +0000 UTC m=+7013.664595206" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.424183 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/util/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.575897 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/util/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.609805 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/pull/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.622307 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/pull/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.703008 4808 generic.go:334] "Generic (PLEG): container finished" podID="02e16810-917f-4884-bd9f-60d19cba0be0" containerID="4fc2cebf2ba7232929e638d2dc38f5bf0fd275a4558d30cdc871b7e13a3e8dfc" exitCode=0 Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.703053 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553756-pfks8" event={"ID":"02e16810-917f-4884-bd9f-60d19cba0be0","Type":"ContainerDied","Data":"4fc2cebf2ba7232929e638d2dc38f5bf0fd275a4558d30cdc871b7e13a3e8dfc"} Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.782552 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/util/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.791866 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/pull/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.815348 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874j2pt4_ad1db85c-8250-4d3f-a27a-e4680d126b3d/extract/0.log" Mar 11 10:36:03 crc kubenswrapper[4808]: I0311 10:36:03.946714 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.122921 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/pull/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.135940 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/pull/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.153178 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.299911 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/extract/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.319647 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.347209 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c189zgv_e795600f-0369-4c94-b30c-f79f2d2eef08/pull/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.450524 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.686940 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.710260 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/pull/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.713973 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/pull/0.log" Mar 11 
10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.874651 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/util/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.891198 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/pull/0.log" Mar 11 10:36:04 crc kubenswrapper[4808]: I0311 10:36:04.959160 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e52dlhh_32820c19-f3d4-48c6-bb0a-a9d402f4db28/extract/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.047720 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-utilities/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.082145 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.213139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qg9c\" (UniqueName: \"kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c\") pod \"02e16810-917f-4884-bd9f-60d19cba0be0\" (UID: \"02e16810-917f-4884-bd9f-60d19cba0be0\") " Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.219255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c" (OuterVolumeSpecName: "kube-api-access-6qg9c") pod "02e16810-917f-4884-bd9f-60d19cba0be0" (UID: "02e16810-917f-4884-bd9f-60d19cba0be0"). InnerVolumeSpecName "kube-api-access-6qg9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.298811 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-content/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.302827 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-utilities/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.310132 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-content/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.315430 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qg9c\" (UniqueName: \"kubernetes.io/projected/02e16810-917f-4884-bd9f-60d19cba0be0-kube-api-access-6qg9c\") on node \"crc\" DevicePath \"\"" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.443392 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-content/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.474596 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/extract-utilities/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.689167 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-utilities/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.720667 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-pfks8" 
event={"ID":"02e16810-917f-4884-bd9f-60d19cba0be0","Type":"ContainerDied","Data":"e1576f808737bd07f6e442827ad9a7b59207aa5c2c0f2d501b5a4f178ee07b05"} Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.720692 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-pfks8" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.721056 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1576f808737bd07f6e442827ad9a7b59207aa5c2c0f2d501b5a4f178ee07b05" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.771219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-bmnlv"] Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.780477 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-bmnlv"] Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.799724 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256f4e2a-fcff-437f-a29e-d344bf24b519" path="/var/lib/kubelet/pods/256f4e2a-fcff-437f-a29e-d344bf24b519/volumes" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.890521 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-content/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.949177 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-utilities/0.log" Mar 11 10:36:05 crc kubenswrapper[4808]: I0311 10:36:05.957649 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-content/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.015696 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-mcsmp_f4446619-4a8d-4e5c-8b21-6c47bbace99c/registry-server/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.156278 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-utilities/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.213151 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/extract-content/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.430156 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b6wjx_469bdff1-1585-4d3e-8261-8d36e14aae58/marketplace-operator/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.524819 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-utilities/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.772872 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-utilities/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.795812 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-content/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.820754 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-content/0.log" Mar 11 10:36:06 crc kubenswrapper[4808]: I0311 10:36:06.978273 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-utilities/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.036421 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/extract-content/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.302865 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-utilities/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.356708 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qmn6r_89ebd57f-641c-4d65-b3b8-cc6ccc005770/registry-server/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.415707 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2vsnp_e7fbeb3b-1b0e-481e-a0e9-6673407ec18f/registry-server/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.531506 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-utilities/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.531914 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-content/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.563197 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-content/0.log" Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.721291 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-utilities/0.log" 
Mar 11 10:36:07 crc kubenswrapper[4808]: I0311 10:36:07.725054 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/extract-content/0.log" Mar 11 10:36:08 crc kubenswrapper[4808]: I0311 10:36:08.320584 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwp6r_0426dadd-5c0b-4f78-8364-7c09b73b51dc/registry-server/0.log" Mar 11 10:36:16 crc kubenswrapper[4808]: I0311 10:36:16.027938 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:36:16 crc kubenswrapper[4808]: I0311 10:36:16.028458 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:36:22 crc kubenswrapper[4808]: E0311 10:36:22.926027 4808 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:47798->38.102.83.113:39975: write tcp 38.102.83.113:47798->38.102.83.113:39975: write: broken pipe Mar 11 10:36:46 crc kubenswrapper[4808]: I0311 10:36:46.027684 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:36:46 crc kubenswrapper[4808]: I0311 10:36:46.028428 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:36:58 crc kubenswrapper[4808]: I0311 10:36:58.907488 4808 scope.go:117] "RemoveContainer" containerID="9e941891ffc546299c60383ca6b55548551c4d5af7e05ebc9f6da388faf8cbe0" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.027290 4808 patch_prober.go:28] interesting pod/machine-config-daemon-tfsm9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.028141 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.028192 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.028959 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223"} pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.029018 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerName="machine-config-daemon" containerID="cri-o://1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" gracePeriod=600 Mar 11 10:37:16 crc kubenswrapper[4808]: E0311 10:37:16.163190 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.404877 4808 generic.go:334] "Generic (PLEG): container finished" podID="3dda5309-668d-4e3c-b3b2-1d708eecc578" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" exitCode=0 Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.404935 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" event={"ID":"3dda5309-668d-4e3c-b3b2-1d708eecc578","Type":"ContainerDied","Data":"1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223"} Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.404973 4808 scope.go:117] "RemoveContainer" containerID="00cc296dafe7303528d0b110e4134845496f29238d7cda6f869b6177d61ee874" Mar 11 10:37:16 crc kubenswrapper[4808]: I0311 10:37:16.405960 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:37:16 crc kubenswrapper[4808]: E0311 10:37:16.406652 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:37:28 crc kubenswrapper[4808]: I0311 10:37:28.788816 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:37:28 crc kubenswrapper[4808]: E0311 10:37:28.789449 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:37:40 crc kubenswrapper[4808]: I0311 10:37:40.790132 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:37:40 crc kubenswrapper[4808]: E0311 10:37:40.790978 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:37:41 crc kubenswrapper[4808]: I0311 10:37:41.871153 4808 generic.go:334] "Generic (PLEG): container finished" podID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerID="a14bb0b3dd8d2c43ac60a068dee920ae075d91906efea0326f821cea997e9fbd" exitCode=0 Mar 11 10:37:41 crc kubenswrapper[4808]: I0311 10:37:41.871302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" 
event={"ID":"6df58535-3024-4f98-b282-5d4a9ac4d6a3","Type":"ContainerDied","Data":"a14bb0b3dd8d2c43ac60a068dee920ae075d91906efea0326f821cea997e9fbd"} Mar 11 10:37:41 crc kubenswrapper[4808]: I0311 10:37:41.872329 4808 scope.go:117] "RemoveContainer" containerID="a14bb0b3dd8d2c43ac60a068dee920ae075d91906efea0326f821cea997e9fbd" Mar 11 10:37:42 crc kubenswrapper[4808]: I0311 10:37:42.014403 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2gcw_must-gather-4q9wf_6df58535-3024-4f98-b282-5d4a9ac4d6a3/gather/0.log" Mar 11 10:37:49 crc kubenswrapper[4808]: I0311 10:37:49.648566 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w2gcw/must-gather-4q9wf"] Mar 11 10:37:49 crc kubenswrapper[4808]: I0311 10:37:49.649534 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="copy" containerID="cri-o://57f82cca6fb2d46dc9f61c9a4a08a432f00755d5c029852ad781aeb038bb3af2" gracePeriod=2 Mar 11 10:37:49 crc kubenswrapper[4808]: I0311 10:37:49.661846 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w2gcw/must-gather-4q9wf"] Mar 11 10:37:49 crc kubenswrapper[4808]: I0311 10:37:49.953142 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2gcw_must-gather-4q9wf_6df58535-3024-4f98-b282-5d4a9ac4d6a3/copy/0.log" Mar 11 10:37:49 crc kubenswrapper[4808]: I0311 10:37:49.953948 4808 generic.go:334] "Generic (PLEG): container finished" podID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerID="57f82cca6fb2d46dc9f61c9a4a08a432f00755d5c029852ad781aeb038bb3af2" exitCode=143 Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.074477 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2gcw_must-gather-4q9wf_6df58535-3024-4f98-b282-5d4a9ac4d6a3/copy/0.log" Mar 11 10:37:50 crc kubenswrapper[4808]: 
I0311 10:37:50.074839 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.143478 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output\") pod \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.143844 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzb7\" (UniqueName: \"kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7\") pod \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\" (UID: \"6df58535-3024-4f98-b282-5d4a9ac4d6a3\") " Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.155736 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7" (OuterVolumeSpecName: "kube-api-access-hjzb7") pod "6df58535-3024-4f98-b282-5d4a9ac4d6a3" (UID: "6df58535-3024-4f98-b282-5d4a9ac4d6a3"). InnerVolumeSpecName "kube-api-access-hjzb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.244886 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6df58535-3024-4f98-b282-5d4a9ac4d6a3" (UID: "6df58535-3024-4f98-b282-5d4a9ac4d6a3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.246159 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzb7\" (UniqueName: \"kubernetes.io/projected/6df58535-3024-4f98-b282-5d4a9ac4d6a3-kube-api-access-hjzb7\") on node \"crc\" DevicePath \"\"" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.246175 4808 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6df58535-3024-4f98-b282-5d4a9ac4d6a3-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.965034 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w2gcw_must-gather-4q9wf_6df58535-3024-4f98-b282-5d4a9ac4d6a3/copy/0.log" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.965614 4808 scope.go:117] "RemoveContainer" containerID="57f82cca6fb2d46dc9f61c9a4a08a432f00755d5c029852ad781aeb038bb3af2" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.965633 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w2gcw/must-gather-4q9wf" Mar 11 10:37:50 crc kubenswrapper[4808]: I0311 10:37:50.990306 4808 scope.go:117] "RemoveContainer" containerID="a14bb0b3dd8d2c43ac60a068dee920ae075d91906efea0326f821cea997e9fbd" Mar 11 10:37:51 crc kubenswrapper[4808]: E0311 10:37:51.088548 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df58535_3024_4f98_b282_5d4a9ac4d6a3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df58535_3024_4f98_b282_5d4a9ac4d6a3.slice/crio-5285902ddf1de89a4533fd96db236051f0a7468442d9ea6980dc8ea7d323f42f\": RecentStats: unable to find data in memory cache]" Mar 11 10:37:51 crc kubenswrapper[4808]: I0311 10:37:51.790770 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:37:51 crc kubenswrapper[4808]: E0311 10:37:51.791663 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:37:51 crc kubenswrapper[4808]: I0311 10:37:51.801550 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" path="/var/lib/kubelet/pods/6df58535-3024-4f98-b282-5d4a9ac4d6a3/volumes" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.147820 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553758-6rwm2"] Mar 11 10:38:00 crc kubenswrapper[4808]: E0311 10:38:00.149401 4808 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e16810-917f-4884-bd9f-60d19cba0be0" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.149494 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e16810-917f-4884-bd9f-60d19cba0be0" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4808]: E0311 10:38:00.149570 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="gather" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.149625 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="gather" Mar 11 10:38:00 crc kubenswrapper[4808]: E0311 10:38:00.149687 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="copy" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.149744 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="copy" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.149935 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e16810-917f-4884-bd9f-60d19cba0be0" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.150020 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="gather" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.150084 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df58535-3024-4f98-b282-5d4a9ac4d6a3" containerName="copy" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.150659 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.153397 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.153710 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.153886 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.165316 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-6rwm2"] Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.322445 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdwx\" (UniqueName: \"kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx\") pod \"auto-csr-approver-29553758-6rwm2\" (UID: \"3ba2315a-e72b-4ce4-b170-d6f4661cef94\") " pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.425175 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdwx\" (UniqueName: \"kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx\") pod \"auto-csr-approver-29553758-6rwm2\" (UID: \"3ba2315a-e72b-4ce4-b170-d6f4661cef94\") " pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.462318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdwx\" (UniqueName: \"kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx\") pod \"auto-csr-approver-29553758-6rwm2\" (UID: \"3ba2315a-e72b-4ce4-b170-d6f4661cef94\") " 
pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.495240 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:00 crc kubenswrapper[4808]: I0311 10:38:00.733838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-6rwm2"] Mar 11 10:38:01 crc kubenswrapper[4808]: I0311 10:38:01.047663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" event={"ID":"3ba2315a-e72b-4ce4-b170-d6f4661cef94","Type":"ContainerStarted","Data":"9656922e57b2b82e401a1bf6a144b0a95a540b039c76355c048c24717c6f5b2e"} Mar 11 10:38:03 crc kubenswrapper[4808]: I0311 10:38:03.068261 4808 generic.go:334] "Generic (PLEG): container finished" podID="3ba2315a-e72b-4ce4-b170-d6f4661cef94" containerID="b2961998573af7d3a67ad8f1637c383ea67a5eeecee041a2aaa7e94ed1c2ace9" exitCode=0 Mar 11 10:38:03 crc kubenswrapper[4808]: I0311 10:38:03.068380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" event={"ID":"3ba2315a-e72b-4ce4-b170-d6f4661cef94","Type":"ContainerDied","Data":"b2961998573af7d3a67ad8f1637c383ea67a5eeecee041a2aaa7e94ed1c2ace9"} Mar 11 10:38:03 crc kubenswrapper[4808]: I0311 10:38:03.789519 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:38:03 crc kubenswrapper[4808]: E0311 10:38:03.789922 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" 
Mar 11 10:38:04 crc kubenswrapper[4808]: I0311 10:38:04.444582 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:04 crc kubenswrapper[4808]: I0311 10:38:04.617142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hdwx\" (UniqueName: \"kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx\") pod \"3ba2315a-e72b-4ce4-b170-d6f4661cef94\" (UID: \"3ba2315a-e72b-4ce4-b170-d6f4661cef94\") " Mar 11 10:38:04 crc kubenswrapper[4808]: I0311 10:38:04.624010 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx" (OuterVolumeSpecName: "kube-api-access-4hdwx") pod "3ba2315a-e72b-4ce4-b170-d6f4661cef94" (UID: "3ba2315a-e72b-4ce4-b170-d6f4661cef94"). InnerVolumeSpecName "kube-api-access-4hdwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:38:04 crc kubenswrapper[4808]: I0311 10:38:04.719951 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hdwx\" (UniqueName: \"kubernetes.io/projected/3ba2315a-e72b-4ce4-b170-d6f4661cef94-kube-api-access-4hdwx\") on node \"crc\" DevicePath \"\"" Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.085204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" event={"ID":"3ba2315a-e72b-4ce4-b170-d6f4661cef94","Type":"ContainerDied","Data":"9656922e57b2b82e401a1bf6a144b0a95a540b039c76355c048c24717c6f5b2e"} Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.085255 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9656922e57b2b82e401a1bf6a144b0a95a540b039c76355c048c24717c6f5b2e" Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.085699 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-6rwm2" Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.511598 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-hrg4m"] Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.517460 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-hrg4m"] Mar 11 10:38:05 crc kubenswrapper[4808]: I0311 10:38:05.798373 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3629122d-f8fc-4c36-980c-74732b419e4e" path="/var/lib/kubelet/pods/3629122d-f8fc-4c36-980c-74732b419e4e/volumes" Mar 11 10:38:14 crc kubenswrapper[4808]: I0311 10:38:14.789194 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:38:14 crc kubenswrapper[4808]: E0311 10:38:14.791434 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:38:27 crc kubenswrapper[4808]: I0311 10:38:27.789803 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:38:27 crc kubenswrapper[4808]: E0311 10:38:27.790598 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" 
podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:38:42 crc kubenswrapper[4808]: I0311 10:38:42.789692 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:38:42 crc kubenswrapper[4808]: E0311 10:38:42.790793 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:38:55 crc kubenswrapper[4808]: I0311 10:38:55.793080 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:38:55 crc kubenswrapper[4808]: E0311 10:38:55.794967 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:38:59 crc kubenswrapper[4808]: I0311 10:38:59.021841 4808 scope.go:117] "RemoveContainer" containerID="6a45ee5f87790ef1fd0837a3fd4382ad40b0cfeec0d93f6c6b2a69e75b3abbf4" Mar 11 10:39:07 crc kubenswrapper[4808]: I0311 10:39:07.790020 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:39:07 crc kubenswrapper[4808]: E0311 10:39:07.791265 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:39:18 crc kubenswrapper[4808]: I0311 10:39:18.789334 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:39:18 crc kubenswrapper[4808]: E0311 10:39:18.790572 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.473085 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:29 crc kubenswrapper[4808]: E0311 10:39:29.474333 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba2315a-e72b-4ce4-b170-d6f4661cef94" containerName="oc" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.474406 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba2315a-e72b-4ce4-b170-d6f4661cef94" containerName="oc" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.474696 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba2315a-e72b-4ce4-b170-d6f4661cef94" containerName="oc" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.479107 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.514610 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.663207 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvh6w\" (UniqueName: \"kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.663648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.663698 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.765371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.765448 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.765576 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvh6w\" (UniqueName: \"kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.766233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.766245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.794241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvh6w\" (UniqueName: \"kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w\") pod \"redhat-marketplace-lvvgd\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.797960 4808 scope.go:117] "RemoveContainer" 
containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:39:29 crc kubenswrapper[4808]: E0311 10:39:29.798173 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:39:29 crc kubenswrapper[4808]: I0311 10:39:29.811280 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:30 crc kubenswrapper[4808]: I0311 10:39:30.287803 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:31 crc kubenswrapper[4808]: I0311 10:39:31.158096 4808 generic.go:334] "Generic (PLEG): container finished" podID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerID="7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef" exitCode=0 Mar 11 10:39:31 crc kubenswrapper[4808]: I0311 10:39:31.158146 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerDied","Data":"7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef"} Mar 11 10:39:31 crc kubenswrapper[4808]: I0311 10:39:31.158416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerStarted","Data":"d174487a1b7d97b219f35ef2779e7e56f231f47d47d0b577d04fb11f5d0a12aa"} Mar 11 10:39:31 crc kubenswrapper[4808]: I0311 10:39:31.161146 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider 
Mar 11 10:39:33 crc kubenswrapper[4808]: I0311 10:39:33.176667 4808 generic.go:334] "Generic (PLEG): container finished" podID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerID="172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81" exitCode=0 Mar 11 10:39:33 crc kubenswrapper[4808]: I0311 10:39:33.176731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerDied","Data":"172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81"} Mar 11 10:39:34 crc kubenswrapper[4808]: I0311 10:39:34.192662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerStarted","Data":"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c"} Mar 11 10:39:34 crc kubenswrapper[4808]: I0311 10:39:34.216380 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lvvgd" podStartSLOduration=2.6254824919999997 podStartE2EDuration="5.216334323s" podCreationTimestamp="2026-03-11 10:39:29 +0000 UTC" firstStartedPulling="2026-03-11 10:39:31.16093333 +0000 UTC m=+7222.114256650" lastFinishedPulling="2026-03-11 10:39:33.751785121 +0000 UTC m=+7224.705108481" observedRunningTime="2026-03-11 10:39:34.215592622 +0000 UTC m=+7225.168915982" watchObservedRunningTime="2026-03-11 10:39:34.216334323 +0000 UTC m=+7225.169657643" Mar 11 10:39:39 crc kubenswrapper[4808]: I0311 10:39:39.812186 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:39 crc kubenswrapper[4808]: I0311 10:39:39.813399 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:39 crc kubenswrapper[4808]: I0311 10:39:39.879979 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:40 crc kubenswrapper[4808]: I0311 10:39:40.327886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:40 crc kubenswrapper[4808]: I0311 10:39:40.391333 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.289416 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lvvgd" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="registry-server" containerID="cri-o://be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c" gracePeriod=2 Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.783041 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.923308 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvh6w\" (UniqueName: \"kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w\") pod \"792694b9-4f94-431b-b9a6-a072c1d5f75a\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.923450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content\") pod \"792694b9-4f94-431b-b9a6-a072c1d5f75a\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.923590 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities\") pod \"792694b9-4f94-431b-b9a6-a072c1d5f75a\" (UID: \"792694b9-4f94-431b-b9a6-a072c1d5f75a\") " Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.926165 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities" (OuterVolumeSpecName: "utilities") pod "792694b9-4f94-431b-b9a6-a072c1d5f75a" (UID: "792694b9-4f94-431b-b9a6-a072c1d5f75a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:42 crc kubenswrapper[4808]: I0311 10:39:42.930101 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w" (OuterVolumeSpecName: "kube-api-access-mvh6w") pod "792694b9-4f94-431b-b9a6-a072c1d5f75a" (UID: "792694b9-4f94-431b-b9a6-a072c1d5f75a"). InnerVolumeSpecName "kube-api-access-mvh6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.025333 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.025389 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvh6w\" (UniqueName: \"kubernetes.io/projected/792694b9-4f94-431b-b9a6-a072c1d5f75a-kube-api-access-mvh6w\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.237336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "792694b9-4f94-431b-b9a6-a072c1d5f75a" (UID: "792694b9-4f94-431b-b9a6-a072c1d5f75a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.302586 4808 generic.go:334] "Generic (PLEG): container finished" podID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerID="be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c" exitCode=0 Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.302647 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerDied","Data":"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c"} Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.302688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvvgd" event={"ID":"792694b9-4f94-431b-b9a6-a072c1d5f75a","Type":"ContainerDied","Data":"d174487a1b7d97b219f35ef2779e7e56f231f47d47d0b577d04fb11f5d0a12aa"} Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.302715 4808 scope.go:117] "RemoveContainer" containerID="be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.302871 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvvgd" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.328172 4808 scope.go:117] "RemoveContainer" containerID="172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.330613 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792694b9-4f94-431b-b9a6-a072c1d5f75a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.357360 4808 scope.go:117] "RemoveContainer" containerID="7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.359046 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.366203 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvvgd"] Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.416437 4808 scope.go:117] "RemoveContainer" containerID="be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c" Mar 11 10:39:43 crc kubenswrapper[4808]: E0311 10:39:43.416899 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c\": container with ID starting with be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c not found: ID does not exist" containerID="be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.416968 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c"} err="failed to get container status 
\"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c\": rpc error: code = NotFound desc = could not find container \"be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c\": container with ID starting with be06e702b8c45c74c2f5dad54a2f99f28dbe24a744ad14548561d3b397b7b83c not found: ID does not exist" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.416993 4808 scope.go:117] "RemoveContainer" containerID="172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81" Mar 11 10:39:43 crc kubenswrapper[4808]: E0311 10:39:43.417300 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81\": container with ID starting with 172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81 not found: ID does not exist" containerID="172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.417417 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81"} err="failed to get container status \"172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81\": rpc error: code = NotFound desc = could not find container \"172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81\": container with ID starting with 172115485d2b3a711028182e01e034802dd511201234bd33941197e93c8f3a81 not found: ID does not exist" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.417510 4808 scope.go:117] "RemoveContainer" containerID="7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef" Mar 11 10:39:43 crc kubenswrapper[4808]: E0311 10:39:43.418025 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef\": container with ID starting with 7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef not found: ID does not exist" containerID="7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.418114 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef"} err="failed to get container status \"7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef\": rpc error: code = NotFound desc = could not find container \"7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef\": container with ID starting with 7b551cc3b4dfdf5034a82e280cb4e60ac722a45421316032043620d2e42b5eef not found: ID does not exist" Mar 11 10:39:43 crc kubenswrapper[4808]: E0311 10:39:43.475254 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod792694b9_4f94_431b_b9a6_a072c1d5f75a.slice\": RecentStats: unable to find data in memory cache]" Mar 11 10:39:43 crc kubenswrapper[4808]: I0311 10:39:43.813707 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" path="/var/lib/kubelet/pods/792694b9-4f94-431b-b9a6-a072c1d5f75a/volumes" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.750091 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"] Mar 11 10:39:44 crc kubenswrapper[4808]: E0311 10:39:44.750986 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="extract-utilities" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.751012 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" 
containerName="extract-utilities" Mar 11 10:39:44 crc kubenswrapper[4808]: E0311 10:39:44.751050 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="registry-server" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.751068 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="registry-server" Mar 11 10:39:44 crc kubenswrapper[4808]: E0311 10:39:44.751117 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="extract-content" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.751135 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="extract-content" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.751458 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="792694b9-4f94-431b-b9a6-a072c1d5f75a" containerName="registry-server" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.759412 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jxvr" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.776591 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"] Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.790764 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:39:44 crc kubenswrapper[4808]: E0311 10:39:44.791231 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.857998 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.858095 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr" Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.858180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ql99\" (UniqueName: 
\"kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.959659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ql99\" (UniqueName: \"kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.959748 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.960251 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.960328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.960581 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:44 crc kubenswrapper[4808]: I0311 10:39:44.986835 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ql99\" (UniqueName: \"kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99\") pod \"redhat-operators-2jxvr\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") " pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:45 crc kubenswrapper[4808]: I0311 10:39:45.093918 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:45 crc kubenswrapper[4808]: I0311 10:39:45.331783 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"]
Mar 11 10:39:46 crc kubenswrapper[4808]: I0311 10:39:46.345249 4808 generic.go:334] "Generic (PLEG): container finished" podID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" containerID="c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391" exitCode=0
Mar 11 10:39:46 crc kubenswrapper[4808]: I0311 10:39:46.345314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerDied","Data":"c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391"}
Mar 11 10:39:46 crc kubenswrapper[4808]: I0311 10:39:46.345628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerStarted","Data":"94504c4b3c455ae81220da61bcbf6e8a3f455d6913d471d18c483ce11dc9f713"}
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.145486 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.147532 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.155835 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.304715 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76lv\" (UniqueName: \"kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.304764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.304889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.353420 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerStarted","Data":"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"}
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.406711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x76lv\" (UniqueName: \"kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.406754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.406815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.407278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.407784 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.426957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76lv\" (UniqueName: \"kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv\") pod \"community-operators-4fxwf\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") " pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:47 crc kubenswrapper[4808]: I0311 10:39:47.486478 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:48 crc kubenswrapper[4808]: W0311 10:39:48.001169 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563d9931_3eee_474a_a65d_88dcd6e15748.slice/crio-a9bfa29a72498e1a5f7c73c66c5deff011140ed239ac05cf71c2b1f8ae626aae WatchSource:0}: Error finding container a9bfa29a72498e1a5f7c73c66c5deff011140ed239ac05cf71c2b1f8ae626aae: Status 404 returned error can't find the container with id a9bfa29a72498e1a5f7c73c66c5deff011140ed239ac05cf71c2b1f8ae626aae
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.008597 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.365757 4808 generic.go:334] "Generic (PLEG): container finished" podID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" containerID="a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa" exitCode=0
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.365813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerDied","Data":"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"}
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.371162 4808 generic.go:334] "Generic (PLEG): container finished" podID="563d9931-3eee-474a-a65d-88dcd6e15748" containerID="d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1" exitCode=0
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.371208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerDied","Data":"d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1"}
Mar 11 10:39:48 crc kubenswrapper[4808]: I0311 10:39:48.371238 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerStarted","Data":"a9bfa29a72498e1a5f7c73c66c5deff011140ed239ac05cf71c2b1f8ae626aae"}
Mar 11 10:39:49 crc kubenswrapper[4808]: I0311 10:39:49.383375 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerStarted","Data":"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"}
Mar 11 10:39:49 crc kubenswrapper[4808]: I0311 10:39:49.386039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerStarted","Data":"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"}
Mar 11 10:39:49 crc kubenswrapper[4808]: I0311 10:39:49.406612 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2jxvr" podStartSLOduration=2.91745172 podStartE2EDuration="5.406593012s" podCreationTimestamp="2026-03-11 10:39:44 +0000 UTC" firstStartedPulling="2026-03-11 10:39:46.348225895 +0000 UTC m=+7237.301549215" lastFinishedPulling="2026-03-11 10:39:48.837367147 +0000 UTC m=+7239.790690507" observedRunningTime="2026-03-11 10:39:49.403887855 +0000 UTC m=+7240.357211165" watchObservedRunningTime="2026-03-11 10:39:49.406593012 +0000 UTC m=+7240.359916332"
Mar 11 10:39:50 crc kubenswrapper[4808]: I0311 10:39:50.396706 4808 generic.go:334] "Generic (PLEG): container finished" podID="563d9931-3eee-474a-a65d-88dcd6e15748" containerID="593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e" exitCode=0
Mar 11 10:39:50 crc kubenswrapper[4808]: I0311 10:39:50.396772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerDied","Data":"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"}
Mar 11 10:39:51 crc kubenswrapper[4808]: I0311 10:39:51.412800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerStarted","Data":"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"}
Mar 11 10:39:51 crc kubenswrapper[4808]: I0311 10:39:51.430477 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fxwf" podStartSLOduration=1.9754962219999999 podStartE2EDuration="4.430449831s" podCreationTimestamp="2026-03-11 10:39:47 +0000 UTC" firstStartedPulling="2026-03-11 10:39:48.373425452 +0000 UTC m=+7239.326748802" lastFinishedPulling="2026-03-11 10:39:50.828379091 +0000 UTC m=+7241.781702411" observedRunningTime="2026-03-11 10:39:51.428526737 +0000 UTC m=+7242.381850077" watchObservedRunningTime="2026-03-11 10:39:51.430449831 +0000 UTC m=+7242.383773161"
Mar 11 10:39:55 crc kubenswrapper[4808]: I0311 10:39:55.094396 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:55 crc kubenswrapper[4808]: I0311 10:39:55.095034 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:39:56 crc kubenswrapper[4808]: I0311 10:39:56.147439 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2jxvr" podUID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" containerName="registry-server" probeResult="failure" output=<
Mar 11 10:39:56 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Mar 11 10:39:56 crc kubenswrapper[4808]: >
Mar 11 10:39:57 crc kubenswrapper[4808]: I0311 10:39:57.486769 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:57 crc kubenswrapper[4808]: I0311 10:39:57.486871 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:57 crc kubenswrapper[4808]: I0311 10:39:57.554106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:58 crc kubenswrapper[4808]: I0311 10:39:58.560135 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:39:58 crc kubenswrapper[4808]: I0311 10:39:58.632599 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:39:58 crc kubenswrapper[4808]: I0311 10:39:58.789053 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223"
Mar 11 10:39:58 crc kubenswrapper[4808]: E0311 10:39:58.789273 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"
Mar 11 10:39:59 crc kubenswrapper[4808]: I0311 10:39:59.114175 4808 scope.go:117] "RemoveContainer" containerID="3ccdb61d31a53d08c4311cc817845032d20856957530f121ece74b0aa3327213"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.159602 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553760-srl5s"]
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.161071 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.164961 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.165140 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.165451 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8r7cc"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.171771 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-srl5s"]
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.274132 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czldv\" (UniqueName: \"kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv\") pod \"auto-csr-approver-29553760-srl5s\" (UID: \"42ca2cb6-63c9-4c9c-bc24-7991668f3bff\") " pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.376652 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czldv\" (UniqueName: \"kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv\") pod \"auto-csr-approver-29553760-srl5s\" (UID: \"42ca2cb6-63c9-4c9c-bc24-7991668f3bff\") " pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.401255 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czldv\" (UniqueName: \"kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv\") pod \"auto-csr-approver-29553760-srl5s\" (UID: \"42ca2cb6-63c9-4c9c-bc24-7991668f3bff\") " pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.483122 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.490013 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fxwf" podUID="563d9931-3eee-474a-a65d-88dcd6e15748" containerName="registry-server" containerID="cri-o://74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b" gracePeriod=2
Mar 11 10:40:00 crc kubenswrapper[4808]: I0311 10:40:00.750652 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-srl5s"]
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.393214 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.499393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-srl5s" event={"ID":"42ca2cb6-63c9-4c9c-bc24-7991668f3bff","Type":"ContainerStarted","Data":"073c0098aca1e6f9c568a29f65017d4d98813482fbabaa40260587456653fb20"}
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.501584 4808 generic.go:334] "Generic (PLEG): container finished" podID="563d9931-3eee-474a-a65d-88dcd6e15748" containerID="74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b" exitCode=0
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.501613 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerDied","Data":"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"}
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.501630 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fxwf" event={"ID":"563d9931-3eee-474a-a65d-88dcd6e15748","Type":"ContainerDied","Data":"a9bfa29a72498e1a5f7c73c66c5deff011140ed239ac05cf71c2b1f8ae626aae"}
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.501646 4808 scope.go:117] "RemoveContainer" containerID="74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.501663 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fxwf"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.508105 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x76lv\" (UniqueName: \"kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv\") pod \"563d9931-3eee-474a-a65d-88dcd6e15748\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") "
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.508425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content\") pod \"563d9931-3eee-474a-a65d-88dcd6e15748\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") "
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.508571 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities\") pod \"563d9931-3eee-474a-a65d-88dcd6e15748\" (UID: \"563d9931-3eee-474a-a65d-88dcd6e15748\") "
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.511129 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities" (OuterVolumeSpecName: "utilities") pod "563d9931-3eee-474a-a65d-88dcd6e15748" (UID: "563d9931-3eee-474a-a65d-88dcd6e15748"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.517850 4808 scope.go:117] "RemoveContainer" containerID="593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.520866 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv" (OuterVolumeSpecName: "kube-api-access-x76lv") pod "563d9931-3eee-474a-a65d-88dcd6e15748" (UID: "563d9931-3eee-474a-a65d-88dcd6e15748"). InnerVolumeSpecName "kube-api-access-x76lv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.578232 4808 scope.go:117] "RemoveContainer" containerID="d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.586477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "563d9931-3eee-474a-a65d-88dcd6e15748" (UID: "563d9931-3eee-474a-a65d-88dcd6e15748"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.604097 4808 scope.go:117] "RemoveContainer" containerID="74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"
Mar 11 10:40:01 crc kubenswrapper[4808]: E0311 10:40:01.604479 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b\": container with ID starting with 74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b not found: ID does not exist" containerID="74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.604545 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b"} err="failed to get container status \"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b\": rpc error: code = NotFound desc = could not find container \"74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b\": container with ID starting with 74e5c301a14022e2d0548c644f503bd8b3920d47ddd39f2609c150008a8e908b not found: ID does not exist"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.604579 4808 scope.go:117] "RemoveContainer" containerID="593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"
Mar 11 10:40:01 crc kubenswrapper[4808]: E0311 10:40:01.604875 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e\": container with ID starting with 593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e not found: ID does not exist" containerID="593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.604911 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e"} err="failed to get container status \"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e\": rpc error: code = NotFound desc = could not find container \"593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e\": container with ID starting with 593799e4ca6010c70e27734fcd9a09dca0dad9730044a84649baf21cf5b3224e not found: ID does not exist"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.604936 4808 scope.go:117] "RemoveContainer" containerID="d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1"
Mar 11 10:40:01 crc kubenswrapper[4808]: E0311 10:40:01.605241 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1\": container with ID starting with d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1 not found: ID does not exist" containerID="d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.605276 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1"} err="failed to get container status \"d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1\": rpc error: code = NotFound desc = could not find container \"d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1\": container with ID starting with d8a75c16851720ac926c4ab8921aa1fca590a66c501ef9758fd0c2cb5b7beec1 not found: ID does not exist"
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.611607 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.611639 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563d9931-3eee-474a-a65d-88dcd6e15748-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.611653 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x76lv\" (UniqueName: \"kubernetes.io/projected/563d9931-3eee-474a-a65d-88dcd6e15748-kube-api-access-x76lv\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.850668 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:40:01 crc kubenswrapper[4808]: I0311 10:40:01.859808 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fxwf"]
Mar 11 10:40:02 crc kubenswrapper[4808]: I0311 10:40:02.512164 4808 generic.go:334] "Generic (PLEG): container finished" podID="42ca2cb6-63c9-4c9c-bc24-7991668f3bff" containerID="9a7b6d78e108dcf794d8ac65155646926281bd9d15bf28119036569bc0280b55" exitCode=0
Mar 11 10:40:02 crc kubenswrapper[4808]: I0311 10:40:02.512295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-srl5s" event={"ID":"42ca2cb6-63c9-4c9c-bc24-7991668f3bff","Type":"ContainerDied","Data":"9a7b6d78e108dcf794d8ac65155646926281bd9d15bf28119036569bc0280b55"}
Mar 11 10:40:03 crc kubenswrapper[4808]: I0311 10:40:03.800732 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563d9931-3eee-474a-a65d-88dcd6e15748" path="/var/lib/kubelet/pods/563d9931-3eee-474a-a65d-88dcd6e15748/volumes"
Mar 11 10:40:03 crc kubenswrapper[4808]: I0311 10:40:03.915379 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.051454 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czldv\" (UniqueName: \"kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv\") pod \"42ca2cb6-63c9-4c9c-bc24-7991668f3bff\" (UID: \"42ca2cb6-63c9-4c9c-bc24-7991668f3bff\") "
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.056454 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv" (OuterVolumeSpecName: "kube-api-access-czldv") pod "42ca2cb6-63c9-4c9c-bc24-7991668f3bff" (UID: "42ca2cb6-63c9-4c9c-bc24-7991668f3bff"). InnerVolumeSpecName "kube-api-access-czldv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.153476 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czldv\" (UniqueName: \"kubernetes.io/projected/42ca2cb6-63c9-4c9c-bc24-7991668f3bff-kube-api-access-czldv\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.538399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-srl5s" event={"ID":"42ca2cb6-63c9-4c9c-bc24-7991668f3bff","Type":"ContainerDied","Data":"073c0098aca1e6f9c568a29f65017d4d98813482fbabaa40260587456653fb20"}
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.538710 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073c0098aca1e6f9c568a29f65017d4d98813482fbabaa40260587456653fb20"
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.538486 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-srl5s"
Mar 11 10:40:04 crc kubenswrapper[4808]: I0311 10:40:04.994951 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-rckb9"]
Mar 11 10:40:05 crc kubenswrapper[4808]: I0311 10:40:05.004250 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-rckb9"]
Mar 11 10:40:05 crc kubenswrapper[4808]: I0311 10:40:05.160617 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:40:05 crc kubenswrapper[4808]: I0311 10:40:05.210812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:40:05 crc kubenswrapper[4808]: I0311 10:40:05.613977 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"]
Mar 11 10:40:05 crc kubenswrapper[4808]: I0311 10:40:05.811514 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6928cdf-b46f-41da-a0e1-842ab6fb5c00" path="/var/lib/kubelet/pods/b6928cdf-b46f-41da-a0e1-842ab6fb5c00/volumes"
Mar 11 10:40:06 crc kubenswrapper[4808]: I0311 10:40:06.555498 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2jxvr" podUID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" containerName="registry-server" containerID="cri-o://8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8" gracePeriod=2
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.032317 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.208754 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content\") pod \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") "
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.208889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ql99\" (UniqueName: \"kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99\") pod \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") "
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.208966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities\") pod \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\" (UID: \"2156d9ec-c47d-4415-b75b-9cff45ce7d49\") "
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.211579 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities" (OuterVolumeSpecName: "utilities") pod "2156d9ec-c47d-4415-b75b-9cff45ce7d49" (UID: "2156d9ec-c47d-4415-b75b-9cff45ce7d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.217650 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99" (OuterVolumeSpecName: "kube-api-access-9ql99") pod "2156d9ec-c47d-4415-b75b-9cff45ce7d49" (UID: "2156d9ec-c47d-4415-b75b-9cff45ce7d49"). InnerVolumeSpecName "kube-api-access-9ql99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.311496 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ql99\" (UniqueName: \"kubernetes.io/projected/2156d9ec-c47d-4415-b75b-9cff45ce7d49-kube-api-access-9ql99\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.311532 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.371400 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2156d9ec-c47d-4415-b75b-9cff45ce7d49" (UID: "2156d9ec-c47d-4415-b75b-9cff45ce7d49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.413136 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2156d9ec-c47d-4415-b75b-9cff45ce7d49-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.574808 4808 generic.go:334] "Generic (PLEG): container finished" podID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" containerID="8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8" exitCode=0
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.574870 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerDied","Data":"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"}
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.574901 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jxvr"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.574921 4808 scope.go:117] "RemoveContainer" containerID="8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.574907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jxvr" event={"ID":"2156d9ec-c47d-4415-b75b-9cff45ce7d49","Type":"ContainerDied","Data":"94504c4b3c455ae81220da61bcbf6e8a3f455d6913d471d18c483ce11dc9f713"}
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.621179 4808 scope.go:117] "RemoveContainer" containerID="a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.621288 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"]
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.629067 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2jxvr"]
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.646665 4808 scope.go:117] "RemoveContainer" containerID="c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.692036 4808 scope.go:117] "RemoveContainer" containerID="8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"
Mar 11 10:40:07 crc kubenswrapper[4808]: E0311 10:40:07.692555 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8\": container with ID starting with 8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8 not found: ID does not exist" containerID="8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.692604 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8"} err="failed to get container status \"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8\": rpc error: code = NotFound desc = could not find container \"8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8\": container with ID starting with 8a43a0035dcda56c3509853a2d577415a62859c7a9e36d5909b27d1fe664ded8 not found: ID does not exist"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.692637 4808 scope.go:117] "RemoveContainer" containerID="a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"
Mar 11 10:40:07 crc kubenswrapper[4808]: E0311 10:40:07.693270 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa\": container with ID starting with a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa not found: ID does not exist" containerID="a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.693330 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa"} err="failed to get container status \"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa\": rpc error: code = NotFound desc = could not find container \"a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa\": container with ID starting with a9680c73102926d6e7d9d8b5ad1aa00dc4aa97b01fd1f8bf4ac5f27e1ff37bfa not found: ID does not exist"
Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.693406 4808 scope.go:117] "RemoveContainer" containerID="c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391"
Mar 11 10:40:07 crc kubenswrapper[4808]: E0311
10:40:07.693870 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391\": container with ID starting with c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391 not found: ID does not exist" containerID="c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391" Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.693896 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391"} err="failed to get container status \"c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391\": rpc error: code = NotFound desc = could not find container \"c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391\": container with ID starting with c0ade77ee16bdd3013fecf3d9dfef166aecc8a4b4102f71eaaf08a08f2067391 not found: ID does not exist" Mar 11 10:40:07 crc kubenswrapper[4808]: I0311 10:40:07.805928 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2156d9ec-c47d-4415-b75b-9cff45ce7d49" path="/var/lib/kubelet/pods/2156d9ec-c47d-4415-b75b-9cff45ce7d49/volumes" Mar 11 10:40:11 crc kubenswrapper[4808]: I0311 10:40:11.789338 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:40:11 crc kubenswrapper[4808]: E0311 10:40:11.790059 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:40:22 crc kubenswrapper[4808]: I0311 10:40:22.790026 
4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:40:22 crc kubenswrapper[4808]: E0311 10:40:22.792703 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:40:37 crc kubenswrapper[4808]: I0311 10:40:37.790331 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:40:37 crc kubenswrapper[4808]: E0311 10:40:37.791229 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:40:51 crc kubenswrapper[4808]: I0311 10:40:51.789258 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:40:51 crc kubenswrapper[4808]: E0311 10:40:51.790613 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:40:59 crc kubenswrapper[4808]: I0311 
10:40:59.190161 4808 scope.go:117] "RemoveContainer" containerID="dfa6d5cb26081fc108c13dd9b053a4dcc61ed2f4b86c993f119bce09ca0fa557" Mar 11 10:41:05 crc kubenswrapper[4808]: I0311 10:41:05.789616 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:41:05 crc kubenswrapper[4808]: E0311 10:41:05.791891 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:41:16 crc kubenswrapper[4808]: I0311 10:41:16.790481 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:41:16 crc kubenswrapper[4808]: E0311 10:41:16.791751 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578" Mar 11 10:41:30 crc kubenswrapper[4808]: I0311 10:41:30.790249 4808 scope.go:117] "RemoveContainer" containerID="1319ea54b643c165259c07e980866f491c6dc66e4563046f9e2b24e8cfa88223" Mar 11 10:41:30 crc kubenswrapper[4808]: E0311 10:41:30.793126 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tfsm9_openshift-machine-config-operator(3dda5309-668d-4e3c-b3b2-1d708eecc578)\"" pod="openshift-machine-config-operator/machine-config-daemon-tfsm9" podUID="3dda5309-668d-4e3c-b3b2-1d708eecc578"